Automated way to transfer old volumes to new provisioner #1287
Comments
/kind feature
@czomo It's not exactly convenient, but I believe you should be able to manually create a new PV (and, if necessary, a PVC) that explicitly references the old volume, using the same method as static provisioning.
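A minimal sketch of that manual step, assuming the EBS CSI driver's static provisioning method; the PV/PVC names, StorageClass name, volume ID, and size below are hypothetical placeholders, not values from this issue:

```yaml
# Pre-created PV pointing at an existing EBS volume via the CSI driver.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-pv-example          # hypothetical name
spec:
  capacity:
    storage: 100Gi                   # must match the real volume size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ebs-sc           # hypothetical CSI StorageClass
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # hypothetical existing EBS volume ID
---
# PVC bound explicitly to the pre-created PV via spec.volumeName.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: migrated-pvc-example
spec:
  storageClassName: ebs-sc
  volumeName: migrated-pv-example
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```

In practice a `nodeAffinity` block restricting the PV to the volume's availability zone is usually added as well, since an EBS volume can only attach to nodes in its own AZ.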
Interesting, we could do that.
I'm a little worried about downtime, but a few seconds should be okay.
Yeah, that's the basic idea. Currently, the external driver doesn't reconcile the volume type of already-created volumes at all, so you could do step 0 at any point during the process. Unfortunately, I think some small downtime will be necessary (and you already have it down to the minimal amount) unless Kubernetes itself adds a migration feature: existing volumes cannot have their StorageClass or provisioner updated (those fields are immutable), so you will always end up recreating the PV/PVC to migrate.
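The recreation flow described above can be sketched roughly as follows; this is a hedged outline, not a tested procedure, and the resource names (`old-pv`, `old-pvc`, `my-app`, the manifest filename) are hypothetical:

```shell
# 0. (Optional, at any point) change the volume type on the AWS side,
#    e.g. gp2 -> gp3, since the driver does not reconcile it.

# 1. Ensure deleting the PV does not delete the underlying EBS volume.
kubectl patch pv old-pv \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# 2. Stop the workload using the volume (downtime starts here).
kubectl scale deployment my-app --replicas=0

# 3. Delete the old PVC and PV objects; the EBS volume itself is retained.
kubectl delete pvc old-pvc
kubectl delete pv old-pv

# 4. Recreate the PV/PVC pair against ebs.csi.aws.com using the static
#    provisioning method, then bring the workload back (downtime ends).
kubectl apply -f migrated-pv-and-pvc.yaml
kubectl scale deployment my-app --replicas=1
```

The downtime window is only steps 2 through 4, which is why it can be kept to seconds per workload.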
@torredil @gtxu pvmigrate from replicatedhq has something that can help people transfer multiple PVs at once. It needs some enhancements, of course. What do you think?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is your feature request related to a problem? Please describe.
Switching to the aws-ebs-csi driver is great; however, we are left with hundreds of volumes still handled by the old kubernetes.io/aws-ebs provisioner. CSI migration redirects all plugin operations from the existing in-tree plugin to ebs.csi.aws.com, but this doesn't help when we can't use the benefits of enabling CSI, like gp3 or snapshots.
Describe the solution you'd like in detail
Provide an automated or semi-automated way to transfer old volumes to the new CSI format, so that we can use all the benefits of switching to ebs-csi-driver.
Describe alternatives you've considered
Right now we could upgrade the volume type to gp3 in the AWS console and accept the drift between Kubernetes and the actual state of the volumes, OR use the manual workaround at https://aws.amazon.com/blogs/containers/migrating-amazon-eks-clusters-from-gp2-to-gp3-ebs-volumes/ (very time-consuming).
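The "upgrade in AWS and accept the drift" alternative amounts to a single volume modification outside Kubernetes; a sketch using the AWS CLI, with a hypothetical volume ID:

```shell
# Change an existing EBS volume from gp2 to gp3 directly in AWS.
# The Kubernetes PV object keeps its old StorageClass and parameters,
# so cluster state and actual AWS state drift apart.
aws ec2 modify-volume \
  --volume-id vol-0123456789abcdef0 \
  --volume-type gp3
```

This avoids any downtime, at the cost of the Kubernetes objects no longer describing the real volume configuration.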