
FileSystem per PVC - option #1453

Open
dbones opened this issue Sep 16, 2024 · 5 comments

Comments


dbones commented Sep 16, 2024

Is your feature request related to a problem?/Why is this needed

I find having one FileSystemId per StorageClass limiting.

All the PVCs that get created exist for different reasons, with different backup policies and different delete policies.

If we had to roll back to a certain backup for one of the PVCs, we would need to figure out how to do so without impacting all the other PVCs on the same FileSystem (even though they are separate PVCs, they live in the same filesystem, to which a single backup applies).

/feature

Describe the solution you'd like in detail

A way for us to configure the FileSystem in the PVC rather than in the storage class, falling back to the storage class when no FileSystemId is provided.

One possible way:

provide a FileSystem resource, allowing us to create a FileSystem in YAML,

and then allowing us to reference that managed resource in a PVC
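This first option might look like the sketch below. The `FileSystem` kind, its API group, and the reference annotation are all hypothetical illustrations of the proposal; none of them exist in the driver today:

```yaml
# Hypothetical FileSystem custom resource (this kind/API group is an
# illustration of the proposal, not an existing driver API).
apiVersion: efs.csi.aws.com/v1alpha1
kind: FileSystem
metadata:
  name: app-data-fs
spec:
  encrypted: true
  performanceMode: generalPurpose
---
# PVC referencing the managed FileSystem above via a hypothetical annotation.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  annotations:
    efs.csi.aws.com/filesystem: app-data-fs   # hypothetical reference
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```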

Another possible way:

the PVC will have an option to include the FileSystemId (via an annotation?)

  • if provided, the driver creates an access point on the existing FileSystem
  • if not provided, it creates the filesystem and an access point <- default

the PVC will also allow us to specify the folder/volume inside; the default is /

it would be nice to be able to specify a backup policy against each PVC as well
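A sketch of what that second option could look like on a PVC. Both annotation keys here are hypothetical; today the driver only reads `fileSystemId` from StorageClass parameters:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  annotations:
    # Hypothetical annotations -- not supported by the driver today.
    efs.csi.aws.com/file-system-id: fs-0123456789abcdef0  # if set: reuse this FS; if omitted: create one
    efs.csi.aws.com/subpath: /app-data                    # folder inside the FS; default /
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```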

Describe alternatives you've considered

  • Crossplane (or a custom operator) to set up a storage class for every PVC (1 to 1)

Additional context

@avanish23 (Contributor)

Hi,
This is indeed interesting.
I'm not sure how we can handle a StorageClass (SC) with no EFS Id: an SC is immutable, so if the driver creates the EFS, there is no way to modify the SC afterwards to add a FileSystemId parameter. This might need some more brainstorming.
The other idea, passing the FileSystemId directly, can be done so that the FileSystemId on the PVC takes precedence over the FileSystemId on the SC. We could start by having users pass a FileSystemId in both the SC and the PVC, where the PVC overrides the SC; if it is not provided in the PVC, we fall back to the FileSystemId in the SC.
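That precedence rule could be sketched as follows. The StorageClass parameters (`provisioningMode: efs-ap`, `fileSystemId`, `directoryPerms`) are the driver's existing dynamic-provisioning parameters; the PVC-level override annotation is hypothetical:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-11111111111111111   # cluster-wide default
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  annotations:
    # Hypothetical: would take precedence over the StorageClass fileSystemId.
    efs.csi.aws.com/file-system-id: fs-22222222222222222
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```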

@avanish23 (Contributor)

Hey, we could follow up on issue #1052 for the ask of creating EFS on the fly.


dbones commented Oct 31, 2024

The other issue is interesting; I think we may be raising a very similar issue, though I'm not sure what is meant by "dynamic".

The main point for us is that we need to set the FileSystemId (we will know what it is) in the PVC spec.

The main flow we have:

Pre-req: an EKS cluster up and running, with an SC set up for EFS (with no FileSystemId)

  1. EFS operator (Crossplane) creates the EFS FileSystem, which then provides us the FileSystemId
  2. Create a PVC in which we specify that FileSystemId
  3. Attach the volume to pods

All IDs should be known (if the EFS was created by Terraform, you could still create a PVC with a known FileSystemId).
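Step 1 could be a Crossplane-managed filesystem along these lines (sketched for the Upbound AWS provider; the exact API group/version depends on the provider release). The resulting FileSystemId would then be passed to the PVC as proposed earlier in the thread:

```yaml
# Step 1: Crossplane-managed EFS filesystem. API group/version shown for the
# Upbound AWS provider and may differ across provider releases.
apiVersion: efs.aws.upbound.io/v1beta1
kind: FileSystem
metadata:
  name: app-data-fs
spec:
  forProvider:
    region: eu-west-1
    encrypted: true
```

Steps 2 and 3 are then the PVC (carrying the known FileSystemId) and the pod volume mount.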


dbones commented Oct 31, 2024

Or, as an option: if we create a PVC, the driver just handles the whole EFS filesystem lifecycle, one filesystem per PVC?

@avanish23 (Contributor)

When I first went through this issue, I did some research on it.
I see a few problems here, though.

  • When we create a request, the external-provisioner sends very little information about the PVC to the CSI driver. To be precise, today only the access mode, access type, and storage request are passed on.
    I can open an issue with them to find out why so little PVC information is passed on and whether we can have more of it forwarded to the drivers. This might require changes in the CSI spec itself, and I am afraid that might not be easy to do.
  • Another concern is the deletion of an EFS filesystem that gets provisioned by the driver. Ideally the driver should also take care of deletion, but the question is: when do we delete the EFS?
    This needs thought because the deletion request must come from the external-provisioner, and the external-provisioner sends delete requests when we delete a PV (in our case, an access point). One option is deleting the EFS when there are no access points left in it, i.e., along with the deletion of the last access point in the filesystem.
    I was also wondering whether we could delete the EFS once the storage class is deleted, but that might start to complicate things.

Labels: none yet · Projects: none yet · Development: no branches or pull requests · 2 participants