
[BUG]: Resource quota bypass #1163

Closed

tnrms007 opened this issue Feb 27, 2024 · 6 comments
Labels
area/csi-powerflex Issue pertains to the CSI Driver for Dell EMC PowerFlex type/bug Something isn't working. This is the default label associated with a bug issue.
Comments

@tnrms007

tnrms007 commented Feb 27, 2024

Bug Description

If you set 1Gi in resources.storage when creating a PVC, the driver actually provisions 8Gi (the next multiple of 8), but the resource quota is only charged 1Gi. Because of this, users can consume more storage than the quota allows. For example, even with a 10Gi limit, 80Gi can be used.

  • If you do not request a multiple of 8 when creating a PVC, you can bypass the resource quota (see the sketch below).
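
To make the numbers concrete, here is a minimal, hypothetical reproduction sketch (the namespace, quota name, and PVC name are illustrative and not taken from the report). A 10Gi requests.storage quota admits ten 1Gi claims like the one below (with distinct names), while the driver provisions 8Gi for each of them, i.e. roughly 80Gi on the array.

# Hypothetical ResourceQuota: admits up to 10Gi of requested storage.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota-example      # illustrative name
  namespace: quota-test            # illustrative namespace
spec:
  hard:
    requests.storage: 10Gi
---
# Hypothetical PVC: charged 1Gi against the quota, but the PowerFlex
# driver rounds the volume up to 8Gi on the array.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-quota-bypass-example   # illustrative name
  namespace: quota-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powerflexos-xfs-sc-300t   # storage class from the report
  resources:
    requests:
      storage: 1Gi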

Logs

kubectl get pvc test
test ~ 8Gi

kubectl get resourcequota
test 1Gi/50Gi

Screenshots

No response

Additional Environment Information

No response

Steps to Reproduce

Create a PVC requesting 1Gi and check the resource quota (a minimal command sequence is sketched below).
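
Assuming the quota and PVC manifests from the sketch above are saved as quota.yaml and pvc.yaml (hypothetical file names), the discrepancy can be observed with standard kubectl commands:

kubectl apply -f quota.yaml
kubectl apply -f pvc.yaml
kubectl get pvc                # CAPACITY shows 8Gi (size allocated by the driver, per the report)
kubectl get resourcequota      # requests.storage is charged only 1Gi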

Expected Behavior

The resource quota should be charged based on the capacity actually allocated by the CSI driver.

CSM Driver(s)

csi powerflex 2.2 ~ latest

Installation Type

No response

Container Storage Modules Enabled

No response

Container Orchestrator

kubernetes 1.22 ~ 1.26

Operating System

RHEL 8 / CentOS 7.9

@tnrms007 tnrms007 added needs-triage Issue requires triage. type/bug Something isn't working. This is the default label associated with a bug issue. labels Feb 27, 2024
@csmbot
Collaborator

csmbot commented Feb 27, 2024

@tnrms007: Thank you for submitting this issue!

The issue is currently awaiting triage. Please make sure you have given us as much context as possible.

If the maintainers determine this is a relevant issue, they will remove the needs-triage label and respond appropriately.


We want your feedback! If you have any questions or suggestions regarding our contributing process/workflow, please reach out to us at container.storage.modules@dell.com.

@hoppea2
Collaborator

hoppea2 commented Mar 6, 2024

/sync

@hoppea2 hoppea2 removed the needs-triage Issue requires triage. label Mar 6, 2024
@csmbot
Collaborator

csmbot commented Mar 6, 2024

link: 21589

@AkshaySainiDell AkshaySainiDell added the area/csi-powerflex Issue pertains to the CSI Driver for Dell EMC PowerFlex label Mar 7, 2024
@sharmilarama sharmilarama added this to the v1.10.0 milestone Mar 11, 2024
@jooseppi-luna
Contributor

@tnrms007 we are looking into this. Can you provide the kubectl describe output for the StorageClass and ResourceQuota you used, as well as a copy-paste or screenshot of the logs from creating the PVC and checking the size of the PVC and the resource quota?

@tnrms007
Author

tnrms007 commented Mar 12, 2024

Hi, here are the logs and test information.


OS

Red Hat Enterprise Linux release 8.6 (Ootpa)
CentOS Linux release 7.9.2009 (Core)

k8s (Higher versions are not tested)

v1.20.5
v1.21.12
v1.22.10
v1.24.14
v1.25.10
v1.26.05

csi-powerflex version

The same phenomenon occurs in all versions of csi-powerflex (2.2.0 ~ 2.8.0).

csi-attacher:v4.2.0
csi-provisioner:v3.4.0
csi-external-health-monitor-controller:v0.8.0
csi-resizer:v1.7.0
csi-vxflexos:v2.6.0


StorageClass

$ kubectl get sc powerflexos-xfs-sc-300t -o yaml

allowVolumeExpansion: true
allowedTopologies:
- matchLabelExpressions:
  - key: csi-vxflexos.dellemc.com/abcdtest
    values:
    - csi-vxflexos.dellemc.com
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    argocd.argoproj.io/instance: dev-upgrade-test-06.csi-vxflexos-child
  name: powerflexos-xfs-sc-300t
mountOptions:
- discard
parameters:
  FsType: xfs
  storagepool: SP999
  systemID: abcdtest
provisioner: csi-vxflexos.dellemc.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

StorageQuota

$ kubectl get resourcequotas ## Before creating pvc
NAME AGE REQUEST LIMIT
storage-quota-test 5d20h requests.storage: 0/30Gi

$ kubectl get sc ## Storageclass with different VOLUMEBINDINGMODE
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
powerflexos-xfs-sc-300t csi-vxflexos.dellemc.com Delete Immediate true 158m
powerflexos-xfs-sc-300t-nonexpension csi-vxflexos.dellemc.com Delete Immediate false 4m59s

$ kubectl get pvc ## pvc created with csi-powerflex
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
powerflex-pvc-test-expansion Bound k8s-8d3d17b014 8Gi RWO powerflexos-xfs-sc-300t 8s
powerflex-pvc-test-nonexpension Bound k8s-995dd3ae11 8Gi RWO powerflexos-xfs-sc-300t-nonexpension 4s

$ kubectl get resourcequotas ## The requested volume size and allocated volume size are different.
NAME AGE REQUEST LIMIT
storage-quota-test 5d20h requests.storage: 2Gi/30Gi

@jooseppi-luna
Contributor

jooseppi-luna commented Mar 12, 2024

@tnrms007 Thanks so much for the logs and for testing/finding this discrepancy. As an initial update, what is happening is that our driver takes the CreateVolume request from K8s and rounds the requested volume size up to the next multiple of 8 (the PowerFlex array only allocates storage in 8 Gi chunks, so this is out of our control). However, Kubernetes appears to base the requests.storage field in the resourcequota off of the request, not the actual PV/PVC we create as a result of the request. We are exploring how we can address this and will keep you posted.
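
To illustrate why the quota only sees the request: after binding, the PVC keeps the original ask in spec.resources.requests.storage (which is what a requests.storage quota counts), while the size the driver actually provisioned shows up in status.capacity.storage. The object below is an illustrative sketch based on the values in this report, not output copied from a cluster:

# Approximate shape of the bound PVC in the scenario above (illustrative values):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: powerflex-pvc-test-expansion
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powerflexos-xfs-sc-300t
  resources:
    requests:
      storage: 1Gi          # what the user requested -> what the quota charges
status:
  phase: Bound
  capacity:
    storage: 8Gi            # what the PowerFlex driver actually allocated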
