Need pvc namespace passed to CSI driver #170
The request is to pass the PVC's name and namespace to the CSI driver at volume creation time.
Did #69 not address using secrets in other namespaces? (Documented at https://kubernetes-csi.github.io/docs/secrets-and-credentials.html; see external-provisioner/pkg/controller/controller.go, lines 481 to 499 at 3827f80.)
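For reference, the mechanism from #69 (documented at the link above) resolves secret references from templated StorageClass parameters. Below is a minimal sketch of that templating, heavily simplified compared to the real controller code it stands in for:

```go
package sketch

import "strings"

// resolveProvisionerSecret shows, in simplified form, how the templated
// StorageClass parameters documented at
// https://kubernetes-csi.github.io/docs/secrets-and-credentials.html
// (csi.storage.k8s.io/provisioner-secret-name and
// csi.storage.k8s.io/provisioner-secret-namespace) can be expanded against
// the provisioning PVC. This is a sketch, not the controller.go code
// referenced above.
func resolveProvisionerSecret(params map[string]string, pvcName, pvcNamespace string) (name, namespace string) {
	expand := func(s string) string {
		s = strings.ReplaceAll(s, "${pvc.name}", pvcName)
		return strings.ReplaceAll(s, "${pvc.namespace}", pvcNamespace)
	}
	return expand(params["csi.storage.k8s.io/provisioner-secret-name"]),
		expand(params["csi.storage.k8s.io/provisioner-secret-namespace"])
}
```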
The reason the secrets in the PVC namespace are not made available for paired create/delete operations is that the PVC and its namespace may not exist at deletion time.
@liggitt we only need it in create. It must exist at create time, right? Or am I wrong?
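If the namespace were made available at create time, a driver could bridge the deletion-time gap itself by stamping the namespace onto the backend volume when it is created. A minimal sketch, assuming a hypothetical backend client and an assumed (not spec-defined) parameter key:

```go
package sketch

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// backendClient is a stand-in for a driver's storage-backend API.
type backendClient interface {
	CreateVolume(ctx context.Context, name string, labels map[string]string) (id string, err error)
}

type controllerServer struct {
	backend backendClient
}

// CreateVolume records the PVC namespace as backend metadata at create time,
// so DeleteVolume (which only receives a volume ID and secrets) can still
// reason about the volume after the PVC and its namespace are gone.
// The parameter key is an assumption: this issue is the request to have the
// provisioner pass such a value in the first place.
func (s *controllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	ns := req.GetParameters()["csi.storage.k8s.io/pvc/namespace"] // assumed key
	id, err := s.backend.CreateVolume(ctx, req.GetName(), map[string]string{"pvc-namespace": ns})
	if err != nil {
		return nil, err
	}
	return &csi.CreateVolumeResponse{Volume: &csi.Volume{VolumeId: id}}, nil
}
```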
Here are some use cases that we, at Portworx, have seen over the years.
@saad-ali I've added some use cases for this request ^^
Also, we (Catalyst Cloud) have some similar requirements; I'm not sure whether they should be merged with this one. Besides the PV name/namespace, we also need the reclaim policy of the PV passed to the CSI driver (which in turn passes it to the volume properties in the storage backend), so that, as a public cloud provider, we know whether we should delete the volumes in the backend when the cloud user deletes the k8s cluster. So, could we change the issue title to something like …
For reclaim policy, to deal with the fact that the policy can be changed at any time during a PV's lifetime, I think we need the external-provisioner to call something like an 'updateVolume' method on the CSI driver, which is not in the CSI spec yet.
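To make the suggestion concrete, here is a purely hypothetical sketch of such an interface (names invented for illustration; as the comment says, nothing like this is in the CSI spec):

```go
package sketch

import "context"

// VolumeUpdater is hypothetical: no such RPC exists in the CSI spec at the
// time of this discussion. It illustrates the shape of the 'updateVolume'
// call suggested above, which the external-provisioner could invoke when a
// PV's reclaim policy changes so the backend volume's properties stay in sync.
type VolumeUpdater interface {
	// UpdateVolume pushes updated, driver-opaque attributes (for example,
	// {"reclaim-policy": "Delete"}) to the backend volume.
	UpdateVolume(ctx context.Context, volumeID string, attributes map[string]string) error
}
```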
Potentially related: #213
OK, there are two asks in this issue:
@saad-ali thanks for responding.
So you record some metadata when a volume is provisioned to say it belongs to …
This is a great use case. But making a CSI driver reach into Kubernetes to figure this out on its own is a hack. Kubernetes and CSI already support topology where a volume is only accessible by certain nodes in a cluster. However, Kubernetes and CSI don't provide a way to handle the case where a volume is equally accessible by all nodes but has some internal storage system topology that can influence application performance. Rather than poking a hole in the API to make the hack easier, I would strongly suggest working with the community to come up with a generic way to influence storage-specific topology. A good place to start is the long-standing CSI issue already opened for this: container-storage-interface/spec#44.
Annotations on PVCs MUST NOT be passed to CSI drivers. The Kubernetes PVC object is intended for application portability. When we start leaking cluster/implementation-specific details into it, we are violating that principle, and explicitly passing PVC annotations to CSI drivers encourages that pattern. Let's discuss the specific use cases you have in mind and see if we can come up with better solutions for each of them (for example, the use case you pointed out above) rather than opening up a hole in the API.
The Kubernetes cluster does leak resources on deletion today. That is a problem, but fixing it at the storage system layer is a hack. Cleaning up cluster resources (and ensuring PV reclaim policy) on cluster deletion is the responsibility of Kubernetes or the Kubernetes deployment system. Please open an issue on https://github.com/kubernetes/kubernetes/issues to do the right thing at those layers.
Hi @saad-ali, thanks for your reply. I'm confused about something.
Usually, the cloud system relies on resource metadata/description/tags to identify which resources belong to the Kubernetes cluster, and those metadata/description/tags are set when the Kubernetes cluster resource is created. I have some examples:
We have a use case for managing thousands of datasets, which in turn can have an arbitrary number of versions.
@kerneltime for your use case, is your dataset read-only? Can users create a PVC based on some dataset, modify it, and persist it separately from another PVC based off of the same dataset? I'm trying to understand whether CSI ephemeral volumes can suit your use case better than PVs.
There are 2 kinds: read-only and new datasets. If developers modify a dataset, it becomes a new version.
The in-tree Portworx Kubernetes driver currently relies on determining the namespace of the PVC during creation.
There are two issues with the current implementation, neither of which passes this information during creation of a CSI volume:
1: On mount, the sidecar containers can pass pod.Namespace (or, more likely, the PVC name and namespace) to the CSI driver. We would like to have this at creation as well.
2: Secrets from the PVC namespace are passed for every call except creation.
We need 1 to work, and we would also really like 2 to be more efficient in obtaining secrets.
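For illustration, here is a sketch of what ask 1 could look like on the driver side if the provisioner forwarded PVC identity as CreateVolume parameters. The parameter keys are assumptions for the sake of the example, not defined by the CSI spec:

```go
package sketch

import "github.com/container-storage-interface/spec/lib/go/csi"

// pvcInfo extracts PVC identity from CreateVolume parameters, assuming the
// external-provisioner were extended to inject these values. The key names
// are illustrative; this issue is the request for exactly this behavior.
func pvcInfo(req *csi.CreateVolumeRequest) (name, namespace string) {
	p := req.GetParameters()
	return p["csi.storage.k8s.io/pvc/name"], p["csi.storage.k8s.io/pvc/namespace"]
}
```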