
Automated way to transfer old volumes to new provisioner #1287

Closed
czomo opened this issue Jun 23, 2022 · 9 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

czomo commented Jun 23, 2022

Is your feature request related to a problem? Please describe.
Switching to the aws-ebs-csi driver is great, but it leaves us with hundreds of volumes that are still handled by the old kubernetes.io/aws-ebs provisioner. CSI migration redirects all plugin operations from the existing in-tree plugin to ebs.csi.aws.com, but that alone doesn't help when we can't use the benefits of enabling CSI, such as gp3 volumes or snapshots.

Describe the solution you'd like in detail
Provide an automated or semi-automated way to transfer old volumes to the new CSI format, so that we can get all the benefits of switching to the ebs-csi-driver.

Describe alternatives you've considered
Right now we could either upgrade the volume type to gp3 in the AWS console and accept the drift between Kubernetes and the actual state of the volumes, or use the manual workaround described in https://aws.amazon.com/blogs/containers/migrating-amazon-eks-clusters-from-gp2-to-gp3-ebs-volumes/ (very time consuming).

czomo commented Jun 23, 2022

/kind feature

@ConnorJC3
Contributor

@czomo it's not exactly convenient, but I believe you should be able to manually create a new PV (and, if necessary, a PVC) that explicitly references the old volume, using the same method as static provisioning.
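
For reference, a minimal sketch of what that static-provisioning manifest could look like (the PV/PVC names, namespace, size, zone, StorageClass and volume ID below are placeholders, not values from this issue):

```bash
# Sketch only: a statically provisioned PV/PVC pair that points the CSI driver
# at an existing EBS volume. All names, sizes, zones and the volume ID are
# placeholders -- adjust them to match the old volume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp3
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # the existing EBS volume ID
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.ebs.csi.aws.com/zone
              operator: In
              values:
                - us-east-1a               # AZ where the volume lives
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: migrated-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3
  volumeName: migrated-pv
  resources:
    requests:
      storage: 100Gi
EOF
```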

czomo commented Jun 24, 2022

> @czomo it's not exactly convenient, but I believe you should be able to manually create a new PV (and, if necessary, a PVC) that explicitly references the old volume, using the same method as static provisioning.

Interesting. We could:

  0. Update gp2 to gp3 in the AWS console (question: can we do this in a different order?)
  1. Patch the existing PV's reclaim policy to Retain
  2. Get the PV/PVC definitions and clean them up with kubectl neat
  3. sed the old values to the new ones
  4. Delete the old PV/PVC definitions
  5. kubectl apply -f new-pv/pvc.yaml

Roughly as sketched below. I am a little worried about downtime, but a few seconds should be okay-ish.
Any other ideas or improvements?
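
A rough sketch of those steps for a single volume (object names, namespace, file names and the gp2/gp3 values are placeholders; this is not a vetted procedure):

```bash
# Hypothetical walk-through of steps 1-5 for one volume; names are placeholders.
PV=pvc-0123-placeholder          # name of the existing PV
PVC=data-placeholder             # name of the bound PVC
NS=default                       # namespace of the PVC

# 1. Make sure deleting the PVC/PV does not delete the underlying EBS volume
kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# 2. Export the current definitions, stripped of server-managed fields
kubectl get pv "$PV" -o yaml | kubectl neat > pv.yaml
kubectl get pvc "$PVC" -n "$NS" -o yaml | kubectl neat > pvc.yaml

# 3. Edit pv.yaml / pvc.yaml: point them at the new StorageClass and replace the
#    in-tree awsElasticBlockStore source with a csi: source (driver:
#    ebs.csi.aws.com, same volume ID as volumeHandle), as in the AWS blog
#    linked above. A plain sed only covers the simple parts, e.g.:
sed -i 's/storageClassName: gp2/storageClassName: gp3/' pv.yaml pvc.yaml
#    You may also need to drop spec.claimRef.uid/resourceVersion from pv.yaml
#    so the recreated PVC can bind to the recreated PV.

# 4. Delete the old objects (the EBS volume itself is retained)
kubectl delete pvc "$PVC" -n "$NS"
kubectl delete pv "$PV"

# 5. Recreate them pointing at the same EBS volume
kubectl apply -f pv.yaml -f pvc.yaml
```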

@ConnorJC3
Contributor

Yeah, that's the basic idea. Currently, the external driver doesn't reconcile the volume type of already-created volumes at all, so you could do step 0 at any point during the process.

Unfortunately, I think some small downtime will be necessary (and you already have it down to the minimal amount) unless Kubernetes itself adds a migration feature: an existing PV/PVC cannot have its StorageClass or provisioner updated (those fields are immutable), so you will always end up having to recreate the PV/PVC to migrate.
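
For step 0, the same change can also be made from the CLI instead of the console, along these lines (the volume ID is a placeholder):

```bash
# Sketch: switch an existing EBS volume from gp2 to gp3 outside Kubernetes.
# The volume ID is a placeholder.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp3

# Optionally watch the modification progress
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
```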

czomo commented Jul 11, 2022

@torredil @gtxu pvmigrate from replicatedhq has something that could help people transfer multiple PVs at once. It would need some enhancements, of course. What do you think?
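
For the record, basic pvmigrate usage looks roughly like the following. The flag names are recalled from the replicatedhq/pvmigrate README and should be double-checked there; note that it copies data into new PVCs in the destination StorageClass rather than re-pointing the same EBS volume:

```bash
# Assumed invocation based on the pvmigrate README -- verify flags against
# https://github.com/replicatedhq/pvmigrate before use.
# Copies every PVC in the "gp2" StorageClass into new PVCs in "gp3".
pvmigrate --source-sc gp2 --dest-sc gp3
```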

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Oct 9, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 8, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Dec 8, 2022
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
