Implement dynamic provision support #6
Comments
Is this issue supposed to cover automatically creating new EFS file systems based on PVC requests? In that case, can we cover the ability to set various characteristics of that file system based on the PVC (probably via annotations), such as performance mode, throughput mode, lifecycle, encryption, etc.?
Yes
StorageClass is the mechanism to pass filesystem configurations. See EBS's StorageClass as an example. Do you have to use PVC annotations for this?
No, StorageClass should be sufficient for us.
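For reference, a minimal sketch of the EBS-style StorageClass pattern mentioned above, assuming the EBS CSI driver's provisioner name ebs.csi.aws.com; the parameter values are illustrative, not prescriptive:

```yaml
# Illustrative only: an EBS StorageClass, showing how volume configuration
# is passed through StorageClass parameters rather than PVC annotations.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp2          # EBS volume type
  encrypted: "true"  # parameter values are always strings in a StorageClass
```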
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
We'll also need to call CreateMountTarget: https://docs.aws.amazon.com/efs/latest/ug/API_CreateMountTarget.html. This will be trickier because the driver needs to know the VPC the Kubernetes nodes are in. It also needs to pick a subnet to add the mount target in. Optionally, it takes security group & IP address args. The security group must allow inbound NFS traffic; maybe the driver should be responsible for creating it too? The driver could preemptively create mount targets in all zones and create a PV that points to the mount target's DNS name so that it's mountable from any zone. Alternatively, the driver could take advantage of https://kubernetes.io/blog/2018/10/11/topology-aware-volume-provisioning-in-kubernetes/ to create a mount target only for the zone a pod is scheduled to.
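As a rough sketch of the "PV mountable from any zone" idea above: a statically provisioned PV for this driver could look like the following, where the filesystem ID is hypothetical and the driver (not the PV spec) resolves the mount target's DNS name at mount time:

```yaml
# Sketch: a pre-created PV backed by an EFS filesystem. The volumeHandle
# carries the filesystem ID; the driver resolves the DNS name when mounting.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi              # required by the API; EFS is elastic, so not enforced
  accessModes:
    - ReadWriteMany           # mountable from any zone/node
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678 # hypothetical EFS filesystem ID
```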
Punt this feature to after the initial beta release.
The number of endpoints available is limited by the number of IPv4 addresses available in the VPC subnet. We really don't need an endpoint per request; we need a pattern that utilizes a folder structure within a single EFS instance.
Related to #63
@rfaircloth-splunk @ReillyProcentive @wreed4 I updated the issue description with two options to tackle this problem. PTAL
I think Option 1 is the best. I think the best approach would be if the StorageClass could also accept an existing filesystem ID. This would allow users who wish to have PVC volumes all use the same EFS volume (similar to how the external-storage EFS provisioner works today) to do so, while also allowing for creating a single EFS volume per PVC.
Definitely Option 1. I think the more common pattern is that PVCs are used for dynamic provisioning, not StorageClasses.
If the cluster admin creates two StorageClasses that allow PVCs to overlap each other, or conflict, that is their look-out 😄
@whereisaaron If I understand correctly, are you suggesting modeling EFS dynamic provisioning as creating volumes from an existing EFS filesystem? That's what is being proposed as Option 2. Option 1 models dynamic provisioning as creating a new EFS filesystem.
I guess I misunderstood Option 2 then, @leakingtapan. It appeared to be using the StorageClass as the mechanism of dynamic provisioning rather than the PV? Or did it mean to say a PV would be created for an existing StorageClass, like the example? I think @ReillyProcentive is saying the same thing as me, and also said Option 1, but I think that is because the examples are maybe confusing both of us.
The way I read the options, neither were complete. Option 1 suggested the PVC creating a new filesystem for each PVC, with the StorageClass holding the filesystem creation parameters; it suggested that the PVC use the StorageClass to create a new filesystem every time. Option 2 suggested the PVC creating subdirectories within the same filesystem, and the StorageClass not having filesystem creation parameters.

For what it's worth, I would like to see basically what @whereisaaron is proposing. The StorageClass defines the filesystem; you'd have one filesystem per StorageClass. Then the PVC dynamically provisions volumes within that filesystem defined by the StorageClass. That way, the user has control over which volumes go to which filesystem, and both use cases are satisfied.

Icing on the cake, for me, would be if the creation of the StorageClass ALSO triggered a dynamic provisioning event, and created an EFS filesystem to match. It'd be a really handy feature to be able to entirely control your storage through the CSI driver and not have to also learn the AWS Service Operator. That way would require creating the EFS resource, then the StorageClass to match it, instead of just the StorageClass. In this case, the StorageClass would have to support both dynamic and static provisioning (if the filesystem already exists, or an ID was provided, do nothing new).
FWIW the old non-CSI EFS provisioner (https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs) works like Option 2, where the EFS file system, mount points & network security groups are all sorted out by "something else" first. In their case the necessary filesystem details are supplied to the provisioner up front. The provisioner then dynamically makes a sub-folder (normally with a unique group id) for each PVC that comes asking, to keep them separate. This seems nice & easy & so preferred to me, but I haven't used it much yet or thought hard about backups, security, etc.

If you do create file systems (Option 1), you will need to allow for controlling subnets & security groups for the mount points, & tagging also. Maybe Option 2 is the goal for now & Option 1 a later extension, only if we can't get what we need (soon enough) with AWS Service Operator?
Yep. That's my intention as well.
Is there any sort of timeline for when this feature will be available? We're having to do all kinds of workarounds because the CSI driver does not dynamically provision volumes.
EFS recently added support for Access Points. Maybe another option is to dynamically provision PVs by creating access points in an EFS filesystem (instead of creating subdirectories)?
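A sketch of what an access-point-based StorageClass might look like; the provisioningMode, fileSystemId, and directoryPerms parameter names here are assumptions for illustration, not a settled API:

```yaml
# Hypothetical: dynamic provisioning via EFS Access Points. Each PVC would
# get its own access point (and root directory) inside one filesystem.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap    # assumed mode name: one access point per PVC
  fileSystemId: fs-12345678   # hypothetical pre-created filesystem
  directoryPerms: "700"       # permissions for each PVC's root directory
```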
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Can we use aws-efs-csi-driver in conjunction with efs-provisioner for dynamically provisioning EFS shares? User creates PVC --> efs-provisioner creates a new PV --> all PVs point to the same EFS resource but with different shares.
We implemented this with a custom CSI driver that dynamically creates PVs in response to PVCs. The driver is only responsible for creating and binding the PVs; the actual mount is still performed by the AWS EFS CSI driver. Here you go: https://github.com/LogMeIn/aws-efs-csi-pv-provisioner. I'd be happy to assist in adding this functionality directly into this driver.
That would be great. We are trying to build a multi-tenant cluster with EFS as the main stateful storage solution, and without dynamic provisioning it will be difficult to implement.
@devkid I used your repo code but I'm a little bit confused. I followed the steps through step 6 as per your Readme, but I can't tell whether I need to deploy the PVC that is available under the deployment folder or the one you pasted in the Readme. Also, some files use the storage class efs and some efs-sc.
@gaurav-ackotech The PV and PVC in the deployment folder are for the provisioner itself. The PVC in the Readme is an example of what you may want to use for your application; it has the efs-sc storage class.
For visibility/cross-linking, aws-controllers-k8s/community#328 is the issue for EFS provisioning via the AWS Controllers for Kubernetes project. Once that is available, it would close the loop in terms of "doing it all in k8s".

It seems that per Option 2, the December 2019 discussion, and https://github.com/LogMeIn/aws-efs-csi-pv-provisioner, the expectation is still that the administrator creates the CRD resource for ACK, then extracts the fs-id from the object's status field and creates a StorageClass that points to it, for dynamic PVs to be created as subdirectories underneath. So work on this would not be blocked by lack of aws-controllers-k8s/community#328, as Option 2 works if the administrator provisions EFS via the AWS Console and still just gets the right fs-id.

It might be feasible to implement a controller that auto-creates StorageClasses from EFS provisioning CRDs (or auto-creates EFS provisioning CRDs from StorageClasses), but I don't think that needs to be part of the EFS CSI Driver itself. It might make sense for ACK to do that, for example.
Option 1
User creates a PVC; its creation will trigger the driver to create a NEW EFS filesystem. The StorageClass will be defined as follows:
StorageClass
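The manifest for this section did not survive extraction; below is a minimal sketch of what an Option 1 StorageClass could look like, with filesystem creation parameters drawn from the discussion above (performanceMode and throughputMode are real EFS settings, but their spelling as StorageClass parameters here is an assumption):

```yaml
# Hypothetical Option 1 StorageClass: every PVC creates a brand new
# EFS filesystem configured by these parameters.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  performanceMode: generalPurpose  # or maxIO
  throughputMode: bursting         # or provisioned
  encrypted: "true"
```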
Open question
Option 2 (Preferred)
User creates a PVC; its creation will trigger the driver to provision a subdirectory within an existing EFS filesystem as the volume. The StorageClass will be defined as follows:
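The original manifest is missing here as well; below is a sketch under the assumption that the StorageClass simply names the pre-existing filesystem (the fileSystemId parameter and the example IDs are illustrative), followed by the kind of PVC a user would create:

```yaml
# Hypothetical Option 2 StorageClass: PVCs become subdirectories of one
# pre-provisioned EFS filesystem.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  fileSystemId: fs-12345678  # existing filesystem created out of band
---
# The user-facing side: a normal PVC referencing the class; the driver
# would carve out a subdirectory and bind a PV to it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi  # required by the API, not enforced by EFS
```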
Provisioning EFS filesystem
Since we model the StorageClass as the configuration for creating new directories inside an existing EFS filesystem, the user will need a different way to provision the EFS filesystem itself. This can be achieved by creating a new service operator for EFS using the service operator framework. The user will need to create a custom resource that defines an EFS filesystem, and the operator will create the EFS filesystem by watching the custom resource.
For the various configurations supported by EFS, e.g. performanceMode, throughputMode, kmsKeyId, etc., each will be a field within the EFS CRD.
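A sketch of what such a custom resource might look like; the apiVersion, kind, and exact field spellings are assumptions, and only the EFS settings themselves (performanceMode, throughputMode, kmsKeyId) come from the paragraph above:

```yaml
# Hypothetical EFS filesystem CRD for a service-operator workflow; an
# operator watching these objects would call the EFS API to create the
# filesystem and its mount targets.
apiVersion: efs.aws.example.com/v1alpha1   # assumed group/version
kind: FileSystem                           # assumed kind
metadata:
  name: my-efs
spec:
  performanceMode: generalPurpose
  throughputMode: bursting
  kmsKeyId: arn:aws:kms:us-east-1:111122223333:key/example  # illustrative ARN
  encrypted: true
```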