azblob AuthorizationFailure - This request is not authorized to perform this operation. #1060

Open
jonaski opened this issue Dec 10, 2024 · 4 comments
jonaski commented Dec 10, 2024

When using azblob, clickhouse-backup converts the "T" between the date and the time in the SAS token timestamps to lowercase, e.g. "2024-12-09t14" instead of "2024-12-09T14", which results in an authorization error.
It also appends "restype=container" to the SAS token specified for "sas", which leads to the error "The requested URI does not represent any resource on the server".
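
For reference, a minimal Go sketch (not from clickhouse-backup; the SAS values are placeholders copied from this report) showing that a plain net/url round-trip of the query string keeps the timestamp letters uppercase, which suggests the lowercasing happens somewhere else in the upload path:

package main

import (
    "fmt"
    "net/url"
)

func main() {
    // SAS query string with the same shape as the one in this report.
    sas := "sp=racwdl&st=2024-12-09T14:33:25Z&se=2027-01-01T22:33:25Z&spr=https&sv=2022-11-02&sr=c&sig=REDACTED"
    v, err := url.ParseQuery(sas)
    if err != nil {
        panic(err)
    }
    // Encode sorts the keys and percent-encodes the values (":" becomes
    // "%3A"), but it does not change the case of letters inside a value,
    // so the "T"/"Z" in st and se stay uppercase here.
    fmt.Println(v.Encode())
}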

2024-12-10 09:52:14.053 FTL cmd/clickhouse-backup/main.go:668 > error="-> github.com/Azure/azure-storage-blob-go/azblob.newStorageError, github.com/Azure/azure-storage-blob-go@v0.15.0/azblob/zc_storage_error.go:42
===== RESPONSE ERROR (ServiceCode=AuthorizationFailure) =====
Description=This request is not authorized to perform this operation.
RequestId:d3d0da5a-401e-0040-55e9-4a6ec5000000
Time:2024-12-10T09:52:14.0497061Z, Details: 
   Code: AuthorizationFailure
   PUT https://REDACTED.blob.core.windows.net/clickhouse?restype=container&se=2027-01-01t22%3A33%3A25z&sig=REDACTED&sp=racwdl&spr=https&sr=c&st=2024-12-09t14%3A33%3A25z&sv=2022-11-02&timeout=14401
   User-Agent: [Azure-Storage/0.15 (go1.23.3; linux)]
   X-Ms-Client-Request-Id: [59584aef-f5e9-42d2-4df5-204010a90cd1]
   X-Ms-Version: [2020-10-02]
   --------------------------------------------------------------------------------
   RESPONSE Status: 403 This request is not authorized to perform this operation.
   Content-Length: [246]
   Content-Type: [application/xml]
   Date: [Tue, 10 Dec 2024 09:52:13 GMT]
   Server: [Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0]
   X-Ms-Client-Request-Id: [59584aef-f5e9-42d2-4df5-204010a90cd1]
   X-Ms-Error-Code: [AuthorizationFailure]
   X-Ms-Request-Id: [d3d0da5a-401e-0040-55e9-4a6ec5000000]
   X-Ms-Version: [2020-10-02]


"
Slach (Collaborator) commented Dec 10, 2024

Could you share your configuration (with sensitive credentials redacted)? Please include the output of:

clickhouse-backup version
clickhouse-backup print-config

jonaski (Author) commented Dec 10, 2024

Version 2.6.4

clickhouse-7956797c7f-2fwgz:~# clickhouse-backup print-config
general:
    remote_storage: azblob
    max_file_size: 0
    backups_to_keep_local: 7
    backups_to_keep_remote: 0
    log_level: info
    allow_empty_backups: false
    download_concurrency: 1
    upload_concurrency: 1
    upload_max_bytes_per_second: 0
    download_max_bytes_per_second: 0
    object_disk_server_side_copy_concurrency: 32
    allow_object_disk_streaming: false
    use_resumable_state: true
    restore_schema_on_cluster: ""
    upload_by_part: true
    download_by_part: true
    restore_database_mapping: {}
    restore_table_mapping: {}
    retries_on_failure: 3
    retries_pause: 5s
    watch_interval: 1h
    full_interval: 24h
    watch_backup_name_template: shard{shard}-{type}-{time:20060102150405}
    sharded_operation_mode: ""
    cpu_nice_priority: 15
    io_nice_priority: idle
    rbac_backup_always: true
    rbac_conflict_resolution: recreate
    retriesduration: 5s
    watchduration: 1h0m0s
    fullduration: 24h0m0s
clickhouse:
    username: default
    password: ""
    host: localhost
    port: 9000
    disk_mapping: {}
    skip_tables:
        - system.*
        - INFORMATION_SCHEMA.*
        - information_schema.*
        - _temporary_and_external_tables.*
    skip_table_engines: []
    timeout: 30m
    freeze_by_part: false
    freeze_by_part_where: ""
    use_embedded_backup_restore: false
    embedded_backup_disk: ""
    backup_mutations: true
    restore_as_attach: false
    check_parts_columns: true
    secure: false
    skip_verify: false
    sync_replicated_tables: false
    log_sql_queries: true
    config_dir: /etc/clickhouse-server/
    restart_command: exec:systemctl restart clickhouse-server
    ignore_not_exists_error_during_freeze: true
    check_replicas_before_attach: true
    default_replica_path: /clickhouse/tables/{cluster}/{shard}/{database}/{table}
    default_replica_name: '{replica}'
    tls_key: ""
    tls_cert: ""
    tls_ca: ""
    max_connections: 1
    debug: false
s3:
    access_key: ""
    secret_key: ""
    bucket: ""
    endpoint: ""
    region: us-east-1
    acl: private
    assume_role_arn: ""
    force_path_style: false
    path: ""
    object_disk_path: ""
    disable_ssl: false
    compression_level: 1
    compression_format: tar
    sse: ""
    sse_kms_key_id: ""
    sse_customer_algorithm: ""
    sse_customer_key: ""
    sse_customer_key_md5: ""
    sse_kms_encryption_context: ""
    disable_cert_verification: false
    use_custom_storage_class: false
    storage_class: STANDARD
    custom_storage_class_map: {}
    concurrency: 2
    part_size: 0
    max_parts_count: 4000
    allow_multipart_download: false
    object_labels: {}
    request_payer: ""
    check_sum_algorithm: ""
    debug: false
gcs:
    credentials_file: ""
    credentials_json: ""
    credentials_json_encoded: ""
    embedded_access_key: ""
    embedded_secret_key: ""
    skip_credentials: false
    bucket: ""
    path: ""
    object_disk_path: ""
    compression_level: 1
    compression_format: tar
    debug: false
    force_http: false
    endpoint: ""
    storage_class: STANDARD
    object_labels: {}
    custom_storage_class_map: {}
    client_pool_size: 32
    chunk_size: 0
cos:
    url: ""
    timeout: 2m
    secret_id: ""
    secret_key: ""
    path: ""
    object_disk_path: ""
    compression_format: tar
    compression_level: 1
    debug: false
api:
    listen: localhost:7171
    enable_metrics: true
    enable_pprof: false
    username: ""
    password: ""
    secure: false
    certificate_file: ""
    private_key_file: ""
    ca_cert_file: ""
    ca_key_file: ""
    create_integration_tables: false
    integration_tables_host: ""
    allow_parallel: false
    complete_resumable_after_restart: true
    watch_is_main_process: false
ftp:
    address: ""
    timeout: 2m
    username: ""
    password: ""
    tls: false
    skip_tls_verify: false
    path: ""
    object_disk_path: ""
    compression_format: tar
    compression_level: 1
    concurrency: 3
    debug: false
sftp:
    address: ""
    port: 22
    username: ""
    password: ""
    key: ""
    path: ""
    object_disk_path: ""
    compression_format: tar
    compression_level: 1
    concurrency: 3
    debug: false
azblob:
    endpoint_schema: https
    endpoint_suffix: core.windows.net
    account_name: REDACTED
    account_key: ""
    sas: sp=racwdl&st=2024-12-09T14:33:25Z&se=2027-01-01T22:33:25Z&spr=https&sv=2022-11-02&sr=c&sig=REDACTED
    use_managed_identity: false
    container: clickhouse
    path: ""
    object_disk_path: ""
    compression_level: 1
    compression_format: tar
    sse_key: ""
    buffer_size: 0
    buffer_count: 3
    max_parts_count: 256
    timeout: 4h
    debug: false
custom:
    upload_command: ""
    download_command: ""
    list_command: ""
    delete_command: ""
    command_timeout: 4h
    commandtimeoutduration: 4h0m0s
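
As an aside, a minimal check, assuming gopkg.in/yaml.v3 for the config format, that YAML decoding itself preserves the case of the sas value; the struct and field name here are hypothetical stand-ins, not clickhouse-backup's actual config types:

package main

import (
    "fmt"

    "gopkg.in/yaml.v3"
)

// azblobConfig is a hypothetical stand-in for the real config struct.
type azblobConfig struct {
    SAS string `yaml:"sas"`
}

func main() {
    raw := []byte(`sas: sp=racwdl&st=2024-12-09T14:33:25Z&se=2027-01-01T22:33:25Z&spr=https&sv=2022-11-02&sr=c&sig=REDACTED`)
    var cfg azblobConfig
    if err := yaml.Unmarshal(raw, &cfg); err != nil {
        panic(err)
    }
    // If the timestamps print with uppercase "T"/"Z" here, the lowercasing
    // happens later in the upload path, not at config-parse time.
    fmt.Println(cfg.SAS)
}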

Slach (Collaborator) commented Dec 10, 2024

I tried to reproduce this: https://replit.com/@Slach/AzblobUrlParse?v=1
According to the source code, the sas value is not transformed when read from the config, nor is it transformed inside the SDK ... the error looks weird.
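
Presumably that check amounts to a round-trip like the following sketch (assuming azure-storage-blob-go v0.15.0; account name and sig are placeholders):

package main

import (
    "fmt"
    "net/url"

    "github.com/Azure/azure-storage-blob-go/azblob"
)

func main() {
    raw := "https://account.blob.core.windows.net/clickhouse?" +
        "sp=racwdl&st=2024-12-09T14:33:25Z&se=2027-01-01T22:33:25Z&spr=https&sv=2022-11-02&sr=c&sig=REDACTED"
    u, err := url.Parse(raw)
    if err != nil {
        panic(err)
    }
    // Round-trip the URL through the SDK's own parser; the SAS timestamps
    // come back with the uppercase "T"/"Z" intact.
    parts := azblob.NewBlobURLParts(*u)
    rebuilt := parts.URL()
    fmt.Println(rebuilt.String())
}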

Could you try without sas, using an account_key instead?

azblob:
    account_name: REDACTED
    account_key: "your account key"
    sas: ""

jonaski (Author) commented Dec 10, 2024

I don't have an account key; I need to use a SAS token.

Slach self-assigned this Dec 10, 2024
Slach added this to the 2.6.5 milestone Dec 10, 2024