[Epic] Backup Replication #103

Open · jrnt30 opened this issue Sep 25, 2017 · 25 comments
Labels: Breaking change, Enhancement/User, Epic, Icebox, kind/requirement, Reviewed Q2 2021

Comments

jrnt30 (Contributor) commented Sep 25, 2017

User Stories
As a cluster administrator, I would like to define a replication policy for my backups which will ensure that copies exist in other availability zones or regions. This will allow me to restore a cluster in case of an AZ or region failure.

Non-Goals

  1. Cross-cloud replication of backups
  2. Cross-account replication of backups

Features

  • ?

Original Issue Description

There are a few different dimensions of a DR strategy that may be worth considering. For AWS deployments, the trade-offs and complexity of running Multi-AZ are fairly negligible if you stay in the same region, so a Single Region/Multi-AZ deployment is extremely common.

An additional requirement is often the ability to restore in another region, with more relaxed RTO/RPO, in case an entire region goes down.

Looking over #101 brought a few things to mind, and a large wish list might include:

  • Ability to specify additional block storage providers for syncing to additional regions (or a different type of block storage provider that would simply execute the clone to a different region)
  • Ability to map AZs for a restoration (maybe similar to namespace mapping, but preferably transparent to the user) to allow for something like us-east-1a -> us-west-2b.
  • Writing backup data to an additional bucket in alternate region

Some of these are certainly available to users today (copying snapshots and S3 data) but require additional external integrations to function properly. As a user, it would be more convenient if this could be done in a consolidated way.
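
For reference, the manual approach on AWS today is roughly the following; bucket names, IDs, and regions are placeholders, and this is only a sketch of the kind of external integration meant above, not a tested recipe:

# Copy an EBS snapshot created by a backup into a second region.
aws ec2 copy-snapshot \
  --region us-west-2 \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --description "DR copy of backup snapshot"

# Mirror the backup object storage into a bucket in the second region.
aws s3 sync s3://my-backup-bucket s3://my-backup-bucket-dr \
  --source-region us-east-1 --region us-west-2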

ncdc (Contributor) commented Sep 25, 2017

@jbeda some of what @jrnt30 is describing sounds similar to your idea of "backup targets"

jimzim commented Nov 13, 2017

I was just about to post this as a feature request. :)

I just tried to do this from eastus to westus in Azure and started thinking about how we could copy the snapshot and create the disk in the correct region. We could possibly have a restore target config? I also like the idea of creating multiple backups in other regions, in case a region goes down or a cluster and its resources get deleted.

ncdc (Contributor) commented Nov 13, 2017

@jimzim this is definitely something we need to spec out and do! We've been kicking around the idea of a "backup target", which would replace the current Config kind. You could define as many targets as you wish, and when you perform a backup, you would then specify which target to use. There are some UX issues to reason through here...
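
Purely to illustrate the idea, a hedged sketch of what defining and using such a target could look like; no BackupTarget kind exists today, and the API group, fields, and CLI flag below are all hypothetical:

# Hypothetical only: the kind, group, fields, and flag below do not exist yet.
cat <<EOF | kubectl apply -f -
apiVersion: ark.heptio.com/v1
kind: BackupTarget            # hypothetical kind that would replace Config
metadata:
  name: us-west-2-dr
  namespace: heptio-ark
spec:
  provider: aws
  objectStorage:
    bucket: my-backups-dr
    region: us-west-2
  blockStorage:
    region: us-west-2
EOF

# A backup would then name the target it should go to, e.g.:
#   ark backup create my-backup --target us-west-2-dr   (hypothetical flag)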

jimzim commented Nov 29, 2017

@ncdc Maybe we can discuss this briefly at KubeCon? I have begun making this work on Azure, but before I go much further it would be good to talk about your planned architecture.

ncdc (Contributor) commented Nov 29, 2017 via email

jbeda commented Dec 1, 2017

This is very much what I'm thinking. We need to think about backup targets, restore sources, and ways to munge stuff with a pipeline. Sounds like we are all thinking similar things.

rocketraman commented:

On Azure, you can create a snapshot into a different resource group than the one that the persistent disk is on, which means the snapshots could be created directly into the AZURE_BACKUP_RESOURCE_GROUP instead of AZURE_RESOURCE_GROUP.

Then, cross-RG restores should be quite simple as the source of the data will always be consistent and there should be no refs to AZURE_RESOURCE_GROUP.

I'm not sure if same-Location is a limitation of this -- I've only tried this on two resource groups that are in the same Azure Location.

The command/output I used to test this:

az snapshot create --name foo --resource-group Ark_Dev-Kube --source '/subscriptions/xxx/resourceGroups/my-Dev-Kube1/providers/Microsoft.Compute/disks/devkube1-dynamic-pvc-0bbf7e11-9e82-11e7-a717-000d3af4357e'
  DiskSizeGb  Location    Name    ProvisioningState    ResourceGroup    TimeCreated
------------  ----------  ------  -------------------  ---------------  --------------------------------
           5  canadaeast  foo     Succeeded            Ark_Dev-Kube     2018-01-09T16:21:58.398476+00:00

and the foo snapshot was created in Ark_Dev-Kube even though the disk is in my-Dev-Kube1.
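
For completeness, a cross-resource-group restore from that snapshot would then be roughly the following (the disk name is a placeholder):

# Create a new managed disk in the workload resource group from the snapshot
# that lives in the backup resource group.
az disk create \
  --resource-group my-Dev-Kube1 \
  --name devkube1-restored-pvc \
  --source '/subscriptions/xxx/resourceGroups/Ark_Dev-Kube/providers/Microsoft.Compute/snapshots/foo'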

@ncdc ncdc self-assigned this Mar 9, 2018
@ncdc ncdc added this to the v0.8.0 milestone Mar 9, 2018
@ncdc ncdc modified the milestones: v0.8.0, v0.9.0 Apr 24, 2018
@ncdc ncdc removed this from the v0.9.0 milestone Jun 8, 2018
@rosskukulinski rosskukulinski added the Enhancement/User End-User Enhancement to Velero label Jun 24, 2018
@rosskukulinski rosskukulinski added this to the v1.0.0 milestone Jun 24, 2018
rosskukulinski (Contributor) commented:

For reference, this is the current Ark Backup Replication design.

nrb (Contributor) commented Jul 10, 2018

We've created a document of scenarios that we'll use to inform the design decisions for this project.

We also have a document where we're discussing more detailed changes to the Ark codebase, from which we'll generate a list of specific work items.

Members of the heptio-ark@googlegroups.com Google group have comment access to both documents; anyone who would like to share their thoughts is welcome to comment there.

@nrb nrb modified the milestones: v1.0.0, v0.10.0 Jul 18, 2018
@rosskukulinski rosskukulinski added Epic and removed Epic labels Jul 18, 2018
dijitali commented Jun 3, 2019

Similar scenario for us, I think, and we are using the following manual workaround:

# Make a backup on the first cluster
kubectx my-first-cluster
velero backup create my-backup

# Switch to new cluster and restore the backup
kubectx my-second-cluster
velero restore create --from-backup my-backup

# Find the restored disk name
gcloud config configurations activate my-second-project
gcloud compute disks list

# Move the disk to the necessary region
gcloud compute disks move restore-xyz --destination-zone "${MY_SECOND_CLUSTER_ZONE}"

# Ensure the PV is set to use the retain reclaim policy then delete the old resources
kubectl patch pv mongo-volume-mongodb-0 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl delete statefulset mongodb
kubectl delete pvc mongo-volume-mongodb-0

# Recreate the restored stateful set with references for the new volume
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  serviceName: "mongodb"
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-volume
              mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
      storageClassName: ""
      volumeName: "mongo-volume-mongodb-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-volume-mongodb-0
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "restore-xyz"
    fsType: ext4

EOF

jujugrrr commented Jun 4, 2020

Hi, is there any ETA for this? Being able to use a backup to recover from an AZ failure sounds like a fairly basic requirement.

https://docs.google.com/document/d/1vGz53OVAPynrgi5sF0xSfKKr32NogQP-xgXA1PB6xMc/edit#heading=h.yuq6zfblfpvs sounded promising.

skriss (Contributor) commented Jun 4, 2020

@jujugrrr we have cross-AZ/region backup & restore on our roadmap. If you're interested in contributing in any way (requirements, design work, etc), please let us know!

cc @stephbman

kmadel commented Aug 10, 2020

You don't need backup replication to support multi-zone and multi-region backups on GCP/GKE: the Kubernetes VolumeSnapshot beta support in Velero v1.4 covers it. See #1624 (comment)
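
For anyone going that route, the rough shape of the snapshot class is sketched below; the storage-locations parameter, its value, and the Velero label are assumptions to verify against your PD CSI driver and Velero CSI plugin versions:

# A minimal sketch, assuming the GCE PD CSI driver's storage-locations
# parameter and Velero's CSI snapshot class label.
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: pd-multi-region-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"   # lets the Velero CSI plugin pick this class
driver: pd.csi.storage.gke.io
deletionPolicy: Retain
parameters:
  storage-locations: us   # store snapshots in a multi-region location
EOF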

@nrb nrb mentioned this issue Aug 12, 2020
@nrb nrb removed this from the v2.0 milestone Dec 8, 2020
@eleanor-millman eleanor-millman added the Icebox We see the value, but it is not slated for the next couple releases. label May 3, 2021
fluffyf-x commented:

Hey, I was wondering if there was any update on this, or a breakdown of the tasks required to complete this epic?

My team is running an AKS cluster with the CSI plugin. We've tried restic, as well as restoring the VHD from blob to move the snapshots into another region, which resulted in:

StatusCode: 409, RawError: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: {
  "error": {
    "code": "OperationNotAllowed",
    "message": "Addition of a blob based disk to VM with managed disks is not supported.",
    "target": "dataDisk"
  }
}
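
If it helps anyone hitting the same 409: a blob-backed (unmanaged) disk can't be attached to a VM that uses managed disks, so one possible workaround is to first turn the copied VHD into a managed disk in the target region. Roughly, with names, region, and blob URL as placeholders:

# Create a managed disk from the copied VHD blob, then restore/attach the
# managed disk instead of the raw blob.
# (May also need --source-storage-account-id if the blob lives in another account.)
az disk create \
  --resource-group my-dr-rg \
  --location westeurope \
  --name restored-pvc-disk \
  --source 'https://mydrstorage.blob.core.windows.net/vhds/restored.vhd'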

jmontleon pushed a commit to jmontleon/velero that referenced this issue Jul 7, 2021
jkupidura14 commented:

Is there any update on this? I feel this could be solved fairly easily by not storing the specific volume ID (the snapshot ID in the case of AWS) to restore from, but instead tagging the snapshot with a randomly generated ID that Velero uses as a reference when restoring. That way, no matter which region or AZ you copy the storage backup to, Velero could still restore from it as long as the tag is present. Just a thought.
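
To illustrate the idea on AWS (the tag key and UUID below are made up for the example; Velero does not write such a tag today):

# Tag the snapshot with a stable, provider-agnostic ID (hypothetical tag key).
aws ec2 create-tags \
  --resources snap-0123456789abcdef0 \
  --tags Key=backup.example.io/volume-id,Value=6f1c9f2e-1b2a-4c3d-8e9f-000000000000

# Cross-region copies are re-tagged with the same value after copy-snapshot,
# so a restore can resolve the snapshot by tag instead of by hard-coded ID:
aws ec2 describe-snapshots \
  --region us-west-2 \
  --filters "Name=tag:backup.example.io/volume-id,Values=6f1c9f2e-1b2a-4c3d-8e9f-000000000000" \
  --query 'Snapshots[0].SnapshotId' --output text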

joostvdg commented:

Any update on this?

We are looking into helping customers replicate volume backups across cloud regions (e.g., AWS us-east-1 to us-west-1) with Velero. We did some AWS-specific investigation, but that issue was closed because you have something else lined up. Is this ticket the place where we can track this?

johnroach commented:

Hi, are there any updates regarding this? Is there any way someone can help?

jglick commented Nov 10, 2021

My very limited understanding, from comments by @dsu-igeek at the 2021-11-02 community meeting, is that this sort of feature is on hold pending #4077 and a rewrite of the volume snapshotters to a new architecture based on Astrolabe. While it is not particularly hard to implement replication in a particular plugin without a general framework, subtle timing issues (#2888) could lead to anomalous behavior in applications that do not tolerate a simple copy of their volumes.

iamsamwood commented:

Hello, I'm also wondering whether there are any updates on this, and how I can help.

@eleanor-millman eleanor-millman added the 1.10-candidate The label used for 1.10 planning discussion. label May 25, 2022
@eleanor-millman eleanor-millman removed the 1.10-candidate The label used for 1.10 planning discussion. label Jun 2, 2022
jcockroft64 commented:

I too am wondering about an update. Was this accepted into 1.10?

antonmatsiuk commented:

Any updates on the topic?

veerendra2 commented Mar 2, 2024

Hello, any updates on this? We are hoping to get this feature soon.

Right now we are trying to implement this by copying the Azure disk snapshots to another region with shell/Python scripts and updating the Velero output files (to make a restore smoother if it's ever needed), roughly along these lines:
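
(A minimal sketch of that copy step, assuming the Azure CLI's incremental-snapshot cross-region copy; the resource names, region, and --incremental/--copy-start flags are assumptions to verify against your CLI version.)

# Look up the source snapshot and start a cross-region copy of it.
SRC_ID=$(az snapshot show --resource-group velero-backups --name pvc-snap-001 --query id --output tsv)

az snapshot create \
  --resource-group velero-backups-dr \
  --name pvc-snap-001-dr \
  --location westeurope \
  --source "$SRC_ID" \
  --incremental true \
  --copy-start true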

I was also wondering: has anyone tried using CSI Snapshot Data Movement to make backups available in another region?

UPDATE 16.05.2024

alromeros pushed a commit to alromeros/velero that referenced this issue Oct 25, 2024