
Local path provisioner disallowed from reading Pods logs #9834

Closed

zc-devs opened this issue Mar 29, 2024 · 2 comments

zc-devs (Contributor) commented Mar 29, 2024

Environmental Info:
K3s Version: v1.29.3+k3s1
Local path provisioner: v0.0.26

Node(s) CPU architecture, OS, and Version:
Linux 5.14.0-362.24.1.el9_3.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Mar 20 04:52:13 EDT 2024 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration:
3 servers, embedded etcd

Describe the bug:
If the helper pod fails, the Local path provisioner cannot retrieve its logs.

Steps To Reproduce:
1-8. The same as in #9833.
9. Check the logs of the Local path provisioner. The following error appears:

pods \"helper-pod-delete-pvc-3f2233a9-795e-4ba0-a52f-e4bf335979a4\" is forbidden: User \"system:serviceaccount:kube-system:local-path-provisioner-service-account\" cannot get resource \"pods/log\" in API group \"\" in the namespace \"kube-system\""

local-path-provisioner.log

Expected behavior:
The failed helper pod's logs appear in the Local path provisioner's logs.

Actual behavior:
Error in the provisioner's logs; no logs from the helper pod.
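
The missing grant can also be confirmed directly by impersonating the provisioner's service account (the account name is taken from the error above):

kubectl auth can-i get pods --subresource=log \
  --as=system:serviceaccount:kube-system:local-path-provisioner-service-account \
  -n kube-system
# "no" on an affected cluster; "yes" once the ClusterRole grants pods/log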

Workaround:
Grant the service account permission to read Pod logs:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps", "pods/log"]
    verbs: ["get", "list", "watch"]
brandond (Member) commented Mar 29, 2024

Since when does the local path provisioner want to view pod logs? I guess this was added in rancher/local-path-provisioner#324, but no one updated the RBAC over here.

This will need to be updated in the bundled manifest:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["endpoints", "persistentvolumes", "pods"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]

@brandond brandond moved this from New to Accepted in K3s Development Mar 29, 2024
@brandond brandond added this to the v1.29.4+k3s1 milestone Mar 29, 2024
@brandond brandond moved this from Accepted to Next Up in K3s Development Mar 29, 2024
@brandond brandond moved this from Next Up to Peer Review in K3s Development Apr 11, 2024
@brandond brandond self-assigned this Apr 11, 2024
brandond pushed commits that referenced this issue (including backports to brandond/k3s) Apr 11, 2024
Signed-off-by: Thomas Anderson <127358482+zc-devs@users.noreply.github.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
@brandond brandond moved this from Peer Review to To Test in K3s Development Apr 11, 2024
VestigeJ commented
## Environment Details
Reproduced using VERSION=v1.29.3+k3s1
Validated using COMMIT=81cd630f87ba3c0c720862af4cd02850303083a5

For what it's worth, I was able to hit and reproduce this issue on the v1.28.8 branch as well (#9833).

Infrastructure

  • Cloud

Node(s) CPU architecture, OS, and version:

Linux 5.14.21-150500.53-default x86_64 GNU/Linux
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP5"

Cluster Configuration:

NAME               STATUS   ROLES                       AGE     VERSION
ip-3-2-1-1         Ready    control-plane,etcd,master   3h35m   v1.29.3+k3s-81cd630f

Config.yaml:

node-external-ip: 3.2.1.1
token: YOUR_TOKEN_HERE
write-kubeconfig-mode: 644
debug: true
profile: cis
protect-kernel-defaults: true
cluster-init: true
embedded-registry: true

Reproduction

$ curl https://get.k3s.io --output install-"k3s".sh
$ sudo chmod +x install-"k3s".sh
$ sudo groupadd --system etcd && sudo useradd -s /sbin/nologin --system -g etcd etcd
$ sudo modprobe ip_vs_rr
$ sudo modprobe ip_vs_wrr
$ sudo modprobe ip_vs_sh
$ sudo printf "vm.panic_on_oom=0\nvm.overcommit_memory=1\nkernel.panic=10\nkernel.panic_on_oops=1\n" > ~/90-kubelet.conf
$ sudo cp 90-kubelet.conf /etc/sysctl.d/
$ sudo systemctl restart systemd-sysctl
$ COMMIT=81cd630f87ba3c0c720862af4cd02850303083a5
$ sudo INSTALL_K3S_COMMIT=$COMMIT INSTALL_K3S_EXEC=server ./install-k3s.sh
$ set_kubefig
$ vim pvc-test.yaml
$ vim pod-test.yaml   # contents not attached; a sketch of both manifests follows this command list
$ k get deploy -n kube-system local-path-provisioner -o jsonpath='{$.spec.template.spec.containers[:1].image}'
$ k apply -f pvc-test.yaml
$ k apply -f pod-test.yaml
$ kgp -A -o wide
$ k delete -f pod-test.yaml -f pvc-test.yaml
$ kg pv -A
$ k logs pod/local-path-provisioner
$ k logs pod/local-path-provisioner-6c86858495-9lkr6 -n kube-system
$ k logs pod/local-path-provisioner-6c86858495-9lkr6 -n kube-system
$ kg clusterrole local-path-provisioner-role -o yaml
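
The pvc-test.yaml and pod-test.yaml manifests edited above were not attached; a minimal pair that exercises the provisioner could look like the following (the claim name test-pvc matches the provisioner logs below; everything else is an assumption):

# pvc-test.yaml -- hypothetical reconstruction
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 128Mi
---
# pod-test.yaml -- hypothetical reconstruction
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc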

Results:

// Both the new commit and the existing release run the same version of local-path-provisioner
$ k get deploy -n kube-system local-path-provisioner -o jsonpath='{$.spec.template.spec.containers[:1].image}'

rancher/local-path-provisioner:v0.0.26

// Existing release ClusterRole permissions; note the missing resource: pods/log

$ kg clusterrole local-path-provisioner-role -o yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: local-storage
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2024-04-15T18:42:20Z"
  labels:
    objectset.rio.cattle.io/hash: 183f35c65ffbc3064603f43f1580d8c68a2dabd4
  name: local-path-provisioner-role
  resourceVersion: "273"
  uid: 6c447fa9-505f-43f3-b3d7-fa289476146f
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - persistentvolumeclaims
  - configmaps
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - endpoints
  - persistentvolumes
  - pods
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch

// The latest commit's install now includes the pods/log resource in the ClusterRole

$ kg clusterrole local-path-provisioner-role -o yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAAAYDAAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: local-storage
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2024-04-15T18:42:20Z"
  labels:
    objectset.rio.cattle.io/hash: 183f35c65ffbc3064603f43f1580d8c68a2dabd4
  name: local-path-provisioner-role
  resourceVersion: "278"
  uid: f8302ce3-6990-416b-9afa-b545f373707d
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - persistentvolumeclaims
  - configmaps
  - pods/log
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - endpoints
  - persistentvolumes
  - pods
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch

I did not hit the error in the pod logs during reproduction, for what it's worth. But since the change is a permissions update on the ClusterRole, it is straightforward to verify that the role has the right permissions via the kubectl API.
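
For example, pulling just the first rule's resources out of the validated role shows the added entry:

$ kg clusterrole local-path-provisioner-role -o jsonpath='{.rules[0].resources}'

["nodes","persistentvolumeclaims","configmaps","pods/log"]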

$ k logs pod/local-path-provisioner-6c86858495-9lkr6 -n kube-system

I0415 18:42:38.079959       1 controller.go:811] Starting provisioner controller rancher.io/local-path_local-path-provisioner-6c86858495-9lkr6_62958260-9704-4ca4-ab3a-6038ed1fef65!
I0415 18:42:38.180437       1 controller.go:860] Started provisioner controller rancher.io/local-path_local-path-provisioner-6c86858495-9lkr6_62958260-9704-4ca4-ab3a-6038ed1fef65!
I0415 21:31:29.264836       1 controller.go:1337] provision "default/test-pvc" class "local-path": started
time="2024-04-15T21:31:29Z" level=info msg="Creating volume pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7 at ip-1-1-23:/var/lib/rancher/k3s/storage/pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7_default_test-pvc"
time="2024-04-15T21:31:29Z" level=info msg="create the helper pod helper-pod-create-pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7 into kube-system"
I0415 21:31:29.268005       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-pvc", UID:"1cfed247-e7e4-4da8-b7d7-ffcefe3288c7", APIVersion:"v1", ResourceVersion:"29078", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/test-pvc"
time="2024-04-15T21:31:32Z" level=info msg="Volume pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7 has been created on ip-1-1-23:/var/lib/rancher/k3s/storage/pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7_default_test-pvc"
time="2024-04-15T21:31:32Z" level=info msg="Start of helper-pod-create-pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7 logs"
time="2024-04-15T21:31:32Z" level=info msg="Illegal option -a"
time="2024-04-15T21:31:32Z" level=info msg="End of helper-pod-create-pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7 logs"
I0415 21:31:32.343240       1 controller.go:1442] provision "default/test-pvc" class "local-path": volume "pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7" provisioned
I0415 21:31:32.343275       1 controller.go:1455] provision "default/test-pvc" class "local-path": succeeded
I0415 21:31:32.343283       1 volume_store.go:212] Trying to save persistentvolume "pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7"
I0415 21:31:32.349700       1 volume_store.go:219] persistentvolume "pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7" saved
I0415 21:31:32.349918       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-pvc", UID:"1cfed247-e7e4-4da8-b7d7-ffcefe3288c7", APIVersion:"v1", ResourceVersion:"29078", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7
I0415 21:34:46.530546       1 controller.go:1471] delete "pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7": started
time="2024-04-15T21:34:46Z" level=info msg="Deleting volume pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7 at ip-1-1-23:/var/lib/rancher/k3s/storage/pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7_default_test-pvc"
time="2024-04-15T21:34:46Z" level=info msg="create the helper pod helper-pod-delete-pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7 into kube-system"
time="2024-04-15T21:34:48Z" level=info msg="Volume pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7 has been deleted on ip-1-1-23:/var/lib/rancher/k3s/storage/pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7_default_test-pvc"
time="2024-04-15T21:34:48Z" level=info msg="Start of helper-pod-delete-pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7 logs"
time="2024-04-15T21:34:48Z" level=info msg="Illegal option -a"
time="2024-04-15T21:34:48Z" level=info msg="End of helper-pod-delete-pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7 logs"
I0415 21:34:48.607227       1 controller.go:1486] delete "pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7": volume deleted
I0415 21:34:48.611467       1 controller.go:1531] delete "pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7": persistentvolume deleted
I0415 21:34:48.611485       1 controller.go:1536] delete "pvc-1cfed247-e7e4-4da8-b7d7-ffcefe3288c7": succeeded
