volume-attach-limit argument doesn't work in 1.20 #1174
Comments
@sultanovich: The label(s) `triage/support` cannot be applied, because the repository doesn't have them.
In response to this:
/triage support
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind/support
I kept checking the repository and verified that there is an example indicating that the `--volume-attach-limit` argument is supported.
Any idea why this argument didn't work?
I continued testing and have been able to validate that the argument does work: after editing the DaemonSet, the configuration is applied correctly.
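For reference, a minimal sketch of applying that change with kubectl; it assumes the node plugin runs as the DaemonSet `ebs-csi-node` in `kube-system` (the default manifest layout) and that `ebs-plugin` is the first container in the pod spec:

```sh
# Sketch: append --volume-attach-limit=17 to the node plugin's args.
# Assumes the default DaemonSet name "ebs-csi-node" in kube-system and
# that "ebs-plugin" is the first container in the pod template.
kubectl -n kube-system patch daemonset ebs-csi-node --type=json -p '[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/args/-",
   "value": "--volume-attach-limit=17"}
]'
```

If the driver was installed with Helm, recent chart versions expose this as the `node.volumeAttachLimit` value, which renders the same argument.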
We're trying out reducing the `volume-attach-limit` as well. One thing that we do have is pods with more than one volume. I wonder whether that is a factor?
I'm using a jq query to report results, and am seeing many nodes with much higher counts of volumes in use.
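The exact query isn't shown in the thread; a sketch of one way to report per-node volume counts with kubectl and jq:

```sh
# Count the volumes each node's kubelet reports as in use.
# .status.volumesInUse may be null on nodes with no attached volumes;
# jq's length returns 0 for null, so those nodes print 0.
kubectl get nodes -o json | jq -r '
  .items[]
  | "\(.metadata.name)\t\(.status.volumesInUse | length)"'
```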
For example, this node is configured with a limit of 17; the following is from the `containers:` section of our node DaemonSet:

```yaml
containers:
- args:
  - node
  - --endpoint=$(CSI_ENDPOINT)
  - --volume-attach-limit=17
  - --logtostderr
  - --v=2
  env:
  - name: CSI_ENDPOINT
    value: unix:/csi/csi.sock
  - name: CSI_NODE_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: spec.nodeName
  image: {REDACTED}/k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.5.0
  imagePullPolicy: IfNotPresent
  livenessProbe:
    failureThreshold: 5
    httpGet:
      path: /healthz
      port: healthz
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 3
  name: ebs-plugin
  ports:
  - containerPort: 9808
    name: healthz
    protocol: TCP
  resources:
    limits:
      cpu: 100m
      memory: 512Mi
    requests:
      cpu: 20m
      memory: 64Mi
  securityContext:
    privileged: true
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
  - mountPath: /var/lib/kubelet
    mountPropagation: Bidirectional
    name: kubelet-dir
  - mountPath: /csi
    name: plugin-dir
  - mountPath: /dev
    name: device-dir
  - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    name: kube-api-access-qvcx8
    readOnly: true
```
@sultanovich @aglees Thanks for noting that the configuration must be applied in the DaemonSet.
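One way to confirm the new limit was picked up (a sketch; `ebs.csi.aws.com` is the driver's default registration name) is to read the allocatable count each node plugin reports on its CSINode object:

```sh
# Each CSINode should report the configured attach limit once the
# node plugin pod on that node has restarted with the new argument.
kubectl get csinode -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.drivers[?(@.name=="ebs.csi.aws.com")].allocatable.count}{"\n"}{end}'
```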
Hi @torredil, sorry for the delay; we continued this issue on this thread after I created it. I just have a question about the `--volume-attach-limit` argument: does changing it affect pods that are already scheduled?
@sultanovich based on the code, how Kubernetes generally works, and #1163, I doubt changing this setting will have any impact on already scheduled pods.
I agree with @stevehipwell's assessment. The `--volume-attach-limit` setting only affects where new pods are scheduled; pods that are already running are not rescheduled when the limit changes.
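If existing placements need to conform to a lowered limit, the usual approach is to cordon and drain the affected nodes so their pods reschedule under the new limit; a sketch, with a hypothetical node name:

```sh
# Evict pods from an over-committed node so the scheduler re-places
# them under the new attach limit. The node name is hypothetical.
kubectl cordon ip-10-0-0-1.ec2.internal
kubectl drain ip-10-0-0-1.ec2.internal --ignore-daemonsets --delete-emptydir-data
kubectl uncordon ip-10-0-0-1.ec2.internal
```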
Excellent, thank you very much for the confirmation @stevehipwell / @torredil. I'm going to do some additional checking in the afternoon and then post my findings if they are relevant to the issue, so you can close it later if you wish.
/triage support
What happened?
Since K8s keeps trying to attach volumes even after reaching the limit allowed by the AWS instance type, I tried using the `volume-attach-limit` argument to find a workaround while troubleshooting.
How to reproduce it (as minimally and precisely as possible)?
It can be reproduced by setting the argument and then trying to attach more volumes than the configured maximum, as in the following example.
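The thread doesn't include the exact manifest used; a minimal reproduction sketch, assuming `--volume-attach-limit=2` on the node plugin, an EBS-backed StorageClass named `gp3`, and a hypothetical node name. With three single-volume pods pinned to one node, the third should stay Pending:

```sh
# Hypothetical reproduction: pin 3 pods with one EBS volume each to a
# node whose plugin advertises --volume-attach-limit=2; the scheduler
# should leave the third pod Pending.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: attach-limit-test
spec:
  serviceName: attach-limit-test
  replicas: 3
  selector:
    matchLabels:
      app: attach-limit-test
  template:
    metadata:
      labels:
        app: attach-limit-test
    spec:
      nodeSelector:
        kubernetes.io/hostname: ip-10-0-0-1.ec2.internal  # hypothetical node
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: gp3   # assumes an EBS-backed StorageClass exists
      resources:
        requests:
          storage: 1Gi
EOF
kubectl get pods -l app=attach-limit-test -o wide   # third pod stays Pending
```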
Environment
Kubernetes Version: 1.20
CSI-EBS Driver Version: v1.5.0 (from the image tag in the DaemonSet above)
Previous consultation, without a solution, in the Slack channel:
https://kubernetes.slack.com/archives/C09NXKJKA/p1645218261237229