Improve Deletion Secret Handling #330

Closed
saad-ali opened this issue Aug 14, 2019 · 8 comments · Fixed by #713
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@saad-ali
Member

Today, the external-provisioner looks at the StorageClass for the deletion secret. If the secret does not exist, delete is called anyway.

The problem is that the StorageClass may be deleted or mutated (deleted and recreated with different parameters). This can leave volumes unable to be deleted. Ideally we would want the deletion secret on the PV object. However, @liggitt pointed out that this would result in asymmetry in the API (provisioning is handled by a higher layer controller and specified in the StorageClass, so having that controller then look at the PV object for the delete operation seems wrong). That said, we do want to handle this case better.

So as a compromise, the proposal is to 1) add a reference to the deletion secret as an annotation on the PV object (instead of a first class field), and 2) to better document why you shouldn't have deletion secrets.

For 1, the proposed change is to add a new flag to the external-provisioner that indicates the controller requires a deletion secret. If the SP sets this flag, the external-provisioner stores a reference to the provisioning secret in an annotation on the PV object. When deleting, if the flag is set and the PV object has the annotation, the external-provisioner fetches the secret and passes it in the CSI DeleteVolume call.
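A rough sketch of how 1 might look inside the external-provisioner (Go), assuming a hypothetical flag and purely illustrative annotation keys; the actual names and wiring are left to the implementation:

```go
// Sketch only: annotation keys below are illustrative, not the real ones.
package sketch

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Hypothetical annotation keys used for illustration only.
const (
	annDeletionSecretName      = "example.csi.k8s.io/deletion-secret-name"
	annDeletionSecretNamespace = "example.csi.k8s.io/deletion-secret-namespace"
)

// rememberDeletionSecret records the provisioning secret reference on the PV
// at provision time, so it survives StorageClass deletion or mutation.
func rememberDeletionSecret(pv *v1.PersistentVolume, secretName, secretNamespace string) {
	if pv.Annotations == nil {
		pv.Annotations = map[string]string{}
	}
	pv.Annotations[annDeletionSecretName] = secretName
	pv.Annotations[annDeletionSecretNamespace] = secretNamespace
}

// deletionSecret fetches the secret referenced by the PV annotations at
// delete time; it returns nil if no secret was recorded.
func deletionSecret(ctx context.Context, client kubernetes.Interface, pv *v1.PersistentVolume) (map[string]string, error) {
	name, ok := pv.Annotations[annDeletionSecretName]
	if !ok {
		return nil, nil // no secret recorded; call DeleteVolume without secrets
	}
	namespace := pv.Annotations[annDeletionSecretNamespace]

	secret, err := client.CoreV1().Secrets(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, fmt.Errorf("fetching deletion secret %s/%s: %w", namespace, name, err)
	}

	// CSI DeleteVolumeRequest.Secrets expects map[string]string.
	out := make(map[string]string, len(secret.Data))
	for k, v := range secret.Data {
		out[k] = string(v)
	}
	return out, nil
}
```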

And 2 is being tracked in kubernetes-csi/docs#189 (comment).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 12, 2019
@msau42
Collaborator

msau42 commented Nov 13, 2019

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 13, 2019
@humblec
Contributor

humblec commented Jun 13, 2020

/assign @humblec

@humblec
Contributor

humblec commented Jun 13, 2020

Let me give it a try and address this issue. :)

@mkimuram
Contributor

@humblec

#654 will handle 1) of this issue.

@xing-yang
Contributor

I thought both items raised here were already addressed and this can be closed. See the documentation here: https://kubernetes-csi.github.io/docs/secrets-and-credentials.html#csi-operation-secrets

@msau42 Is there anything pending for this?

@Madhu-1
Contributor

Madhu-1 commented Mar 9, 2022

@xing-yang any update on this one? We still have an issue when the StorageClass is deleted before the PVC is deleted.

@humblec
Contributor

humblec commented Mar 9, 2022

We are halfway through and it's not completely done. We had some discussions on this in between (w.r.t. transfer of objects and its side effects). Now that we have confirmed we can proceed, the pending fixes are being worked on; I am on it.
