Pods not deleted after their Rancher App is deleted #2048

Closed
theAkito opened this issue Jul 22, 2020 · 41 comments
Labels: kind/bug

theAkito commented Jul 22, 2020

Environmental Info:
K3s Version:

k3s version v1.18.2+k3s1 (698e444a)

Node(s) CPU architecture, OS, and Version:

4.15.0-101-generic Ubuntu x86_64

Cluster Configuration:

1 master imported within Rancher.

Describe the bug:

Currently, I have a Rancher deployment with two clusters: one generic RKE cluster and one k3s cluster.

Deploying and deleting Rancher Apps works fine on the RKE cluster.

However, when deploying a Rancher App on the k3s cluster and then deleting it, the App's pods are not deleted and keep running under the radar, undetected by Rancher. Even worse, they still consume all the resources I attempted to free by deleting the App. So the cluster fills up over time and cannot accept new deployments, because all the previously "deleted" workloads are still running behind the scenes.

Original Bug Description


I just discovered that this issue is much worse than originally assumed.
Normally, to delete a pod belonging to a deployment, you have to delete the deployment itself, or else the pod is automatically recreated. In this situation, however, there is no deployment left to delete, yet the pods remain. If you try to delete the pods directly, they come back again. So there is no obvious way to get rid of the pods: you would need to delete their deployments, but those are already gone.


For testing purposes, I set up a chart and deleted its deployments with

kubectl delete --all deployments --namespace=test1 --grace-period=0 --force

The pods remained and had to be deleted with the workaround below.

Steps To Reproduce:

  1. Use Rancher and import the k3s cluster.
  2. Deploy a Rancher App through the Rancher WebUI.
  3. Delete the Rancher App.
  4. kubectl get pods --all-namespaces
  5. See that Pods from the deleted Rancher App are still in Running state and remain like that.

Expected behavior:

Pods of the deleted Rancher App also get deleted.

Actual behavior:

Pods of the deleted Rancher App do not get deleted.

Additional context / logs:

The cluster was imported quite a while ago. The issue wasn't discovered earlier because the server has more than enough resources and is not used that frequently.

Workaround

Currently, the only way to delete the orphaned pods is this:

kubectl delete all --all --namespace=failed-namespace --grace-period=0 --force

This command has to be run twice per namespace.

@brandond (Member)

Are you perhaps deleting the deployment manually instead of deleting the helm chart custom resource that the deployment comes from? Can you provide any more information on the actual charts/deployments/pods that exist on your cluster before and after you try to delete things? Showing what resources exist, not just the commands you're running to delete them, would be helpful.
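
For reference, a minimal sketch of commands that would capture that state (the namespace name is just a placeholder for the one the App deploys into):

kubectl get -A helmcharts
kubectl get all -n app-namespace -o wide
kubectl get deployments,replicasets,pods -n app-namespace -o yaml > before-delete.yaml
# delete the App in the Rancher UI, then repeat with a different output file and compare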

theAkito commented Jul 23, 2020

@brandond

Thanks for replying.

  1. I am deleting the Helm Chart from the "Apps" section in the Rancher UI.
  2. Before deleting the Helm Chart from the Rancher UI, all the resources expected from the Chart are present: Deployments, ConfigMaps, etc. After deleting the Helm Chart from the Rancher UI, I have to use the workaround provided above to force-delete the remaining Deployments and ReplicaSets, even though the latter are not part of the original Helm Chart; it does not define any ReplicaSets. I don't know whether these are usually generated automatically and are therefore to be expected.
  3. After deleting the aforementioned resources, I have to run the workaround again to delete the remaining pods, which were apparently recreated. The ReplicaSets are already gone at that point. After the second run, all the resources in the given namespace are finally gone for good.

Here is an example of the workaround in action, deleting the orphaned objects after triggering the deletion of the Helm Chart from the Rancher UI:

> kubectl delete all --all --namespace=app-230-rc15 --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "app-proxy-6dbc55dc7f-wb5x6" force deleted
pod "main-cloud-print-69cbd54cc4-fg6gs" force deleted
pod "database-6859bd588b-vtllq" force deleted
pod "app-765cbf74b8-25s6x" force deleted
pod "app-ovpn-584c45d5f6-592qm" force deleted
pod "main-solutions-boot-57987f5b95-kjvvc" force deleted
replicaset.apps "app-ovpn-584c45d5f6" force deleted
replicaset.apps "main-solutions-boot-57987f5b95" force deleted
replicaset.apps "app-proxy-6dbc55dc7f" force deleted
replicaset.apps "main-cloud-print-69cbd54cc4" force deleted
replicaset.apps "database-6859bd588b" force deleted
replicaset.apps "app-765cbf74b8" force deleted
> kubectl delete all --all --namespace=app-230-rc15 --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "app-proxy-6dbc55dc7f-8cpxw" force deleted
pod "main-cloud-print-69cbd54cc4-g867s" force deleted
pod "database-6859bd588b-9zlcc" force deleted
pod "main-solutions-boot-57987f5b95-phx8n" force deleted
pod "app-765cbf74b8-g2fk4" force deleted
pod "app-ovpn-584c45d5f6-vrlrc" force deleted

This is what I need to do after supposedly deleting the Helm Chart from the "Apps" section in the Rancher UI. If I don't do that, the pods keep running forever, invisible to Rancher.

@brandond (Member)

Can you provide any more information on the actual charts/deployments/pods that exist on your cluster before and after you try to delete things?

I still don't have any info showing what state things are in before you force-delete them. Is the helm chart actually gone? Are other resources being deleted but stuck on a finalizer?
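
For example, a quick check like this (a sketch; the namespace and ReplicaSet name are placeholders) would show whether something is stuck mid-deletion:

kubectl get replicaset leftover-rs -n app-namespace -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'
# a non-empty deletionTimestamp with finalizers still listed means the object is waiting on a finalizer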

theAkito commented Jul 23, 2020

The objects in the given namespace, after deleting the corresponding Helm Chart in the Rancher UI:

> kubectl get all -n app-230-rc15-1
NAME                                      READY   STATUS    RESTARTS   AGE
pod/app-proxy-6666ddb967-qqgr5            1/1     Running   0          2m54s
pod/main-cloud-print-5ff85758c8-vbgzv      1/1     Running   0          2m54s
pod/app-666df48c46-7c8nz                  1/1     Running   0          2m54s
pod/main-solutions-boot-56586dd7cc-722zk   1/1     Running   0          2m54s
pod/database-7965d5745b-czz58             1/1     Running   0          2m54s
pod/app-ovpn-b5c57f748-sglkq              1/1     Running   0          2m54s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/app-proxy-6666ddb967            1         1         1       2m54s
replicaset.apps/main-cloud-print-5ff85758c8      1         1         1       2m54s
replicaset.apps/app-666df48c46                  1         1         1       2m54s
replicaset.apps/main-solutions-boot-56586dd7cc   1         1         1       2m54s
replicaset.apps/database-7965d5745b             1         1         1       2m54s
replicaset.apps/app-ovpn-b5c57f748              1         1         1       2m54s

As you can see, they are running invisibly, as already described in the previous post.

@brandond (Member)

Can you also run kubectl get -A deployments and kubectl get -A helmcharts? If there are deployments left over, can you provide the full output (-o yaml) from one of them?

@theAkito (Author)

@brandond

kubectl get -A helmcharts only reveals traefik within kube-system.

kubectl get -A deployments shows only the deployments that should be there. So it seems like the pods remain while their deployments vanish, which probably makes it seem like the deployed Charts are actually deleted.

@brandond (Member)

Okay, so can you show the full replicaset output (kubectl get replicaset -o yaml -n YOURNAMESPACE YOURREPLICASET) so we can see if it has a finalizer or owner reference that should be cleaning it up?

@theAkito (Author)

Taken from one ReplicaSet I picked; I assume this is what you are looking for:

  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: app-proxy
    uid: d0e36770-8c2c-46d8-abfc-cb53b390ea94
  resourceVersion: "5724742"
  selfLink: /apis/apps/v1/namespaces/app-230-rc15-1/replicasets/app-proxy-6666ddb967
  uid: 9f6c74c7-6c5a-468a-ab40-c2e8b6c6cb83

Is blockOwnerDeletion: true the culprit? 🤔

brandond commented Jul 24, 2020

Right, so with that ownerReference on there, with blockOwnerDeletion, it should not be possible to delete the parent Deployment until the child ReplicaSet has been deleted. The child Pods will have similar ownerReferences back to the ReplicaSet, which is how everything is supposed to get cleaned up automatically when you delete the Deployment. This is all just core Kubernetes functionality, nothing unique to Helm Charts or k3s.

Remind me, how are you removing the deployments? Are you setting --cascade=false on your delete command, or any other option that might orphan resources?
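
To illustrate the difference (a sketch with placeholder names; on the 1.18-era kubectl used here, the cascade flag is a boolean):

# default: the Deployment's ReplicaSets and Pods are garbage-collected along with it
kubectl delete deployment my-app -n my-namespace
# with cascading disabled, the Deployment is removed but its ReplicaSets and Pods are orphaned
kubectl delete deployment my-app -n my-namespace --cascade=false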

theAkito commented Jul 25, 2020

This is all just core Kubernetes functionality, nothing unique to Helm Charts or k3s.

Then why do the exact same Charts get deleted properly on RKE clusters? Why is the behaviour different? There has to be some difference, or the deletion would behave the same way.

with blockOwnerDeletion, it should not be possible delete the parent Deployment until the child ReplicaSet has been deleted

Is this part of the core Kubernetes functionality? Why is this set? See above question on why the behaviour is different with RKE.

Remind me how are you removing the deployments? Are you setting --cascade=false on your delete command, or setting any other option that might cause orphan resources?

All I'm doing is deleting the Helm Chart from the Apps section of a cluster's project in the Rancher UI, which is also where the Charts are launched from. The CLI is only used if necessary.

Doing this on RKE works without leaving orphaned objects.
Doing this on a k3s-based cluster creates orphaned objects.
So, again, something is different or even incorrect.


Reading through your comment again, I am not sure if I interpreted it correctly. I thought you were saying that the deletion not succeeding here is just normal Kubernetes behaviour. However, perhaps you were just explaining each part of the core functionality without deeming the result "normal" or "expected" in this situation.

brandond commented Jul 25, 2020

Yes, I was saying that the cascading deletion via ownerReference is core Kubernetes behavior. Which version of Rancher are you using? Can you replicate this same behavior on k3s 1.18.6?

theAkito commented Jul 27, 2020

@brandond

Rancher v2.4.5
User Interface v2.4.28
Helm v2.16.8-rancher1
Machine v0.15.0-rancher43

Independently of this issue, the cluster was upgraded to v1.18.6+k3s1, and the issue still persists after setting up and then deleting a test Rancher Chart.


Testing setup and removal of wordpress chart, without any modifications:
Before deletion:

> kubectl get all -n wordpress   
NAME                            READY   STATUS    RESTARTS   AGE
pod/svclb-wordpress-dk2j2       0/2     Pending   0          21s
pod/wordpress-mariadb-0         0/1     Running   0          21s
pod/wordpress-88b8bd898-mfjz4   0/1     Running   0          21s

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/wordpress-mariadb   ClusterIP      10.43.98.144    <none>        3306/TCP                     21s
service/wordpress           LoadBalancer   10.43.205.218   <pending>     80:30485/TCP,443:32451/TCP   21s

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-wordpress   1         1         0       1            0           <none>          21s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wordpress   0/1     1            0           21s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-88b8bd898   1         1         0       21s

NAME                                 READY   AGE
statefulset.apps/wordpress-mariadb   0/1     21s

After deletion:

NAME                            READY   STATUS    RESTARTS   AGE
pod/svclb-wordpress-dk2j2       0/2     Pending   0          43s
pod/wordpress-88b8bd898-mfjz4   0/1     Running   0          43s
pod/wordpress-mariadb-0         1/1     Running   0          43s

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-wordpress   1         1         0       1            0           <none>          43s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-88b8bd898   1         1         0       43s

After first iteration of the workaround:

NAME                            READY   STATUS    RESTARTS   AGE
pod/svclb-wordpress-28s5k       0/2     Pending   0          3s
pod/wordpress-88b8bd898-q67mq   0/1     Pending   0          3s

After second iteration of the workaround:

No resources found in wordpress namespace.

brandond commented Jul 28, 2020

Hey, just out of curiosity, can you try something?

  1. Run kubectl get lease -n kube-system kube-controller-manager
  2. Log in to the node indicated
  3. Stop k3s
  4. Wait 30 seconds
  5. Start k3s

This should force the kube-controller-manager to run on another node; I'm curious if something is going on with the current node that's causing it to not garbage collect properly.
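
As a sketch of the steps above (assuming k3s is managed by systemd on that node):

kubectl get lease -n kube-system kube-controller-manager -o jsonpath='{.spec.holderIdentity}{"\n"}'
# on the node shown above:
sudo systemctl stop k3s
sleep 30
sudo systemctl start k3s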

@theAkito (Author)

@brandond

I would gladly try it out, but this cluster is mainly intended for testing newly developed stuff, so it only has one node; more aren't necessary.

@jonstelly

I believe I'm seeing the same behavior. I'm on a single-master, six-agent cluster running v1.18.6+k3s1 with helm 3.2.4.

I've got a helm chart for an app that I've been deploying via helm 3.x for a few months now. Helm upgrades and deletes have worked as expected until recently; I think the change coincided with my upgrade to 1.18.6. My continuous integration builds/tests run helm upgrade for my dev/test environments and also run helm delete to tear down my short-lived automated acceptance testing environments.

I've seen issues with both the upgrades and deletes not removing the old replicasets.

The ReplicaSet and Pod ownerReferences look correct to me. After an upgrade with revisionHistoryLimit set to 1, I see the new ReplicaSet and a single old ReplicaSet with desired/current/ready set to 0, as I'd expect, but then there is another, older ReplicaSet from the same Deployment with its desired count still set to my non-zero value.

Any suggestions on what additional information or logs I could provide to help diagnose the issue? I tried the k3s service restart mentioned above but since I'm single-master and the controller manager was running on that master, it didn't get failed over to a different node.

on the replicaset:

  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: my-web-thing
    uid: 0ccba6b8-b66e-4b84-bf93-b281b5593c94

on the pod:

  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: my-web-thing-6bb9f99cfb
    uid: 3d0dfca2-d062-406c-86ed-4074af1c5849

kubectl get rs

NAME                              DESIRED   CURRENT   READY   AGE
my-web-thing-78f88ddf5c           3         3         3       20h
my-web-thing-6c74787458           0         0         0       20h

@brandond (Member)

@jonstelly the deployment referred to by the ReplicaSet is gone?

@jonstelly

@brandond

For deletes: the deployment does get deleted, so it is the ReplicaSets and Pods that remain.
For upgrades: I had to clean everything up this morning, so I'm not sure. Let me try running upgrades this afternoon/evening and I'll report back. I know the deployment name stays the same across my helm upgrades, but I don't know whether Helm updates the existing deployment or deletes it and creates a new one (new UID/Kubernetes object).

@brandond (Member)

Without the full info on the pods it's hard to tell what's supposed to belong to what. We'd want to see things hanging around with an ownerReferences entry pointing at an object that no longer exists.
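
A rough way to spot those (a sketch; the namespace is a placeholder) is to print each ReplicaSet together with its owner and then check whether that owner still exists:

kubectl get replicasets -n default -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}{end}'
kubectl get deployment <owner-name> -n default   # "NotFound" for a listed owner means the ReplicaSet is dangling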

@jonstelly

An example of lingering pod+replicaset after a helm delete. I did some search-and-replace to remove sensitive names+values+volumes but I'm pretty sure everything still lines up.

I just noticed that helm 3.3 was released yesterday and that's what my CI process is picking up (latest stable), but I'm running 3.2.4 locally. I'll upgrade to 3.3 just to check that out. This issue doesn't happen for me in my Azure AKS environments (running k8s 1.17 instead of 1.18).

Command being run in my Azure Devops Pipeline is: /azp/agent/_work/_tool/helm/3.3.0/x64/linux-amd64/helm delete foo-alpha-acceptance-0812-203256

and the output is simply:

release "foo-alpha-acceptance-0812-203256" uninstalled

The Deployment that the ReplicaSet Owner refers to does not exist.

ReplicaSet

kind: ReplicaSet
apiVersion: apps/v1
metadata:
  name: foo-alpha-acceptance-0812-203256-web-57b4bccd
  namespace: default
  selfLink: >-
    /apis/apps/v1/namespaces/default/replicasets/foo-alpha-acceptance-0812-203256-web-57b4bccd
  uid: f88b7ef6-307a-4116-91c5-d3859d17f8d7
  resourceVersion: '28861001'
  generation: 1
  creationTimestamp: '2020-08-12T20:33:45Z'
  labels:
    app.kubernetes.io/instance: foo-alpha-acceptance-0812-203256
    app.kubernetes.io/name: foo-web
    pod-template-hash: 57b4bccd
  annotations:
    deployment.kubernetes.io/desired-replicas: '1'
    deployment.kubernetes.io/max-replicas: '2'
    deployment.kubernetes.io/revision: '1'
    meta.helm.sh/release-name: foo-alpha-acceptance-0812-203256
    meta.helm.sh/release-namespace: default
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: foo-alpha-acceptance-0812-203256-web
      uid: 77e2b9b2-dcf6-44ce-9ce4-e35365129772
      controller: true
      blockOwnerDeletion: true
  managedFields:
    - manager: k3s
      operation: Update
      apiVersion: apps/v1
      time: '2020-08-12T20:34:32Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:deployment.kubernetes.io/desired-replicas': {}
            'f:deployment.kubernetes.io/max-replicas': {}
            'f:deployment.kubernetes.io/revision': {}
            'f:meta.helm.sh/release-name': {}
            'f:meta.helm.sh/release-namespace': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/instance': {}
            'f:app.kubernetes.io/name': {}
            'f:pod-template-hash': {}
          'f:ownerReferences':
            .: {}
            'k:{"uid":"77e2b9b2-dcf6-44ce-9ce4-e35365129772"}':
              .: {}
              'f:apiVersion': {}
              'f:blockOwnerDeletion': {}
              'f:controller': {}
              'f:kind': {}
              'f:name': {}
              'f:uid': {}
        'f:spec':
          'f:replicas': {}
          'f:selector':
            'f:matchLabels':
              .: {}
              'f:app.kubernetes.io/instance': {}
              'f:app.kubernetes.io/name': {}
              'f:pod-template-hash': {}
          'f:template':
            'f:metadata':
              'f:labels':
                .: {}
                'f:app.kubernetes.io/instance': {}
                'f:app.kubernetes.io/name': {}
                'f:pod-template-hash': {}
            'f:spec':
              'f:containers':
                'k:{"name":"foo-web"}':
                  .: {}
                  'f:env':
                    .: {}
                    'k:{"name":"DOTNET_ENVIRONMENT"}':
                      .: {}
                      'f:name': {}
                      'f:value': {}
                    'k:{"name":"COMPANY__INSTANCE"}':
                      .: {}
                      'f:name': {}
                      'f:value': {}
                  'f:image': {}
                  'f:imagePullPolicy': {}
                  'f:livenessProbe':
                    .: {}
                    'f:failureThreshold': {}
                    'f:httpGet':
                      .: {}
                      'f:path': {}
                      'f:port': {}
                      'f:scheme': {}
                    'f:periodSeconds': {}
                    'f:successThreshold': {}
                    'f:timeoutSeconds': {}
                  'f:name': {}
                  'f:ports':
                    .: {}
                    'k:{"containerPort":80,"protocol":"TCP"}':
                      .: {}
                      'f:containerPort': {}
                      'f:name': {}
                      'f:protocol': {}
                  'f:readinessProbe':
                    .: {}
                    'f:failureThreshold': {}
                    'f:httpGet':
                      .: {}
                      'f:path': {}
                      'f:port': {}
                      'f:scheme': {}
                    'f:periodSeconds': {}
                    'f:successThreshold': {}
                    'f:timeoutSeconds': {}
                  'f:resources': {}
                  'f:terminationMessagePath': {}
                  'f:terminationMessagePolicy': {}
                  'f:volumeMounts':
                    .: {}
                    'k:{"mountPath":"/root/.docker"}':
                      .: {}
                      'f:mountPath': {}
                      'f:name': {}
              'f:dnsPolicy': {}
              'f:restartPolicy': {}
              'f:schedulerName': {}
              'f:securityContext': {}
              'f:serviceAccount': {}
              'f:serviceAccountName': {}
              'f:terminationGracePeriodSeconds': {}
              'f:tolerations': {}
              'f:topologySpreadConstraints':
                .: {}
                'k:{"topologyKey":"node","whenUnsatisfiable":"DoNotSchedule"}':
                  .: {}
                  'f:maxSkew': {}
                  'f:topologyKey': {}
                  'f:whenUnsatisfiable': {}
              'f:volumes': {}
        'f:status':
          'f:availableReplicas': {}
          'f:fullyLabeledReplicas': {}
          'f:observedGeneration': {}
          'f:readyReplicas': {}
          'f:replicas': {}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: foo-alpha-acceptance-0812-203256
      app.kubernetes.io/name: foo-web
      pod-template-hash: 57b4bccd
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: foo-alpha-acceptance-0812-203256
        app.kubernetes.io/name: foo-web
        pod-template-hash: 57b4bccd
    spec:
      volumes:
      containers:
        - name: foo-web
          image: 'hub.foo.us/foo-web:0.7.10-alpha.921'
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          env:
            - name: COMPANY__INSTANCE
              value: alpha-acceptance-0812-203256
            - name: DOTNET_ENVIRONMENT
              value: alpha
          resources: {}
          volumeMounts:
          livenessProbe:
            httpGet:
              path: /
              port: http
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /
              port: http
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: foo-alpha-acceptance-0812-203256
      serviceAccount: foo-alpha-acceptance-0812-203256
      securityContext: {}
      schedulerName: default-scheduler
      tolerations:
        - key: company/ephemeral
          operator: Exists
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: node
          whenUnsatisfiable: DoNotSchedule
status:
  replicas: 1
  fullyLabeledReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  observedGeneration: 1

Pod

kind: Pod
apiVersion: v1
metadata:
  name: foo-alpha-acceptance-0812-203256-web-57b4bccd-wzljt
  generateName: foo-alpha-acceptance-0812-203256-web-57b4bccd-
  namespace: default
  selfLink: >-
    /api/v1/namespaces/default/pods/foo-alpha-acceptance-0812-203256-web-57b4bccd-wzljt
  uid: 889408bd-3e3b-4549-9eb7-e064b2d74dbf
  resourceVersion: '28860998'
  creationTimestamp: '2020-08-12T20:33:45Z'
  labels:
    app.kubernetes.io/instance: foo-alpha-acceptance-0812-203256
    app.kubernetes.io/name: foo-web
    pod-template-hash: 57b4bccd
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: foo-alpha-acceptance-0812-203256-web-57b4bccd
      uid: f88b7ef6-307a-4116-91c5-d3859d17f8d7
      controller: true
      blockOwnerDeletion: true
  managedFields:
    - manager: k3s
      operation: Update
      apiVersion: v1
      time: '2020-08-12T20:34:32Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:generateName': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/instance': {}
            'f:app.kubernetes.io/name': {}
            'f:pod-template-hash': {}
          'f:ownerReferences':
            .: {}
            'k:{"uid":"f88b7ef6-307a-4116-91c5-d3859d17f8d7"}':
              .: {}
              'f:apiVersion': {}
              'f:blockOwnerDeletion': {}
              'f:controller': {}
              'f:kind': {}
              'f:name': {}
              'f:uid': {}
        'f:spec':
          'f:containers':
            'k:{"name":"foo-web"}':
              .: {}
              'f:env':
                .: {}
                'k:{"name":"DOTNET_ENVIRONMENT"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
                'k:{"name":"COMPANY__INSTANCE"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
              'f:image': {}
              'f:imagePullPolicy': {}
              'f:livenessProbe':
                .: {}
                'f:failureThreshold': {}
                'f:httpGet':
                  .: {}
                  'f:path': {}
                  'f:port': {}
                  'f:scheme': {}
                'f:periodSeconds': {}
                'f:successThreshold': {}
                'f:timeoutSeconds': {}
              'f:name': {}
              'f:ports':
                .: {}
                'k:{"containerPort":80,"protocol":"TCP"}':
                  .: {}
                  'f:containerPort': {}
                  'f:name': {}
                  'f:protocol': {}
              'f:readinessProbe':
                .: {}
                'f:failureThreshold': {}
                'f:httpGet':
                  .: {}
                  'f:path': {}
                  'f:port': {}
                  'f:scheme': {}
                'f:periodSeconds': {}
                'f:successThreshold': {}
                'f:timeoutSeconds': {}
              'f:resources': {}
              'f:terminationMessagePath': {}
              'f:terminationMessagePolicy': {}
              'f:volumeMounts':
                .: {}
                'k:{"mountPath":"/root/.docker"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
          'f:dnsPolicy': {}
          'f:enableServiceLinks': {}
          'f:restartPolicy': {}
          'f:schedulerName': {}
          'f:securityContext': {}
          'f:serviceAccount': {}
          'f:serviceAccountName': {}
          'f:terminationGracePeriodSeconds': {}
          'f:tolerations': {}
          'f:topologySpreadConstraints':
            .: {}
            'k:{"topologyKey":"node","whenUnsatisfiable":"DoNotSchedule"}':
              .: {}
              'f:maxSkew': {}
              'f:topologyKey': {}
              'f:whenUnsatisfiable': {}
          'f:volumes': {}
        'f:status':
          'f:conditions':
            'k:{"type":"ContainersReady"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Initialized"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Ready"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
          'f:containerStatuses': {}
          'f:hostIP': {}
          'f:phase': {}
          'f:podIP': {}
          'f:podIPs':
            .: {}
            'k:{"ip":"10.42.2.174"}':
              .: {}
              'f:ip': {}
          'f:startTime': {}
spec:
  volumes:
  containers:
    - name: foo-web
      image: 'hub.foo.us/foo-web:0.7.10-alpha.921'
      ports:
        - name: http
          containerPort: 80
          protocol: TCP
      env:
        - name: COMPANY__INSTANCE
          value: alpha-acceptance-0812-203256
        - name: DOTNET_ENVIRONMENT
          value: alpha
      resources: {}
      volumeMounts:
      livenessProbe:
        httpGet:
          path: /
          port: http
          scheme: HTTP
        timeoutSeconds: 1
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 3
      readinessProbe:
        httpGet:
          path: /
          port: http
          scheme: HTTP
        timeoutSeconds: 1
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 3
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: Always
  restartPolicy: Always
  terminationGracePeriodSeconds: 30
  dnsPolicy: ClusterFirst
  serviceAccountName: foo-alpha-acceptance-0812-203256
  serviceAccount: foo-alpha-acceptance-0812-203256
  nodeName: pi4b
  securityContext: {}
  schedulerName: default-scheduler
  tolerations:
    - key: company/ephemeral
      operator: Exists
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
  priority: 0
  enableServiceLinks: true
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: node
      whenUnsatisfiable: DoNotSchedule
status:
  phase: Running
  conditions:
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-08-12T20:33:45Z'
    - type: Ready
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-08-12T20:34:32Z'
    - type: ContainersReady
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-08-12T20:34:32Z'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2020-08-12T20:33:45Z'
  hostIP: 192.168.8.21
  podIP: 10.42.2.174
  podIPs:
    - ip: 10.42.2.174
  startTime: '2020-08-12T20:33:45Z'
  containerStatuses:
    - name: foo-web
      state:
        running:
          startedAt: '2020-08-12T20:34:12Z'
      lastState: {}
      ready: true
      restartCount: 0
      image: 'hub.foo.us/foo-web:0.7.10-alpha.921'
      imageID: >-
        hub.foo.us/foo-web@sha256:086c977aa8149abeff094e59bb8af3ae6ae1f0ed8d15c3de5c382c579b82cf60
      containerID: >-
        containerd://c35e5bda27c7914e21317991081cbbc80241246f567ed948ffc0f24502158fb3
      started: true
  qosClass: BestEffort

@jonstelly

Below is the output of helm delete ... --debug.

Tomorrow I'll try to create an isolated repro for this issue using k3d. At this point it could be a bug in the upstream Kubernetes code, something in k3s and its patch-sets, something in helm, or something strange about my chart. I'll report back with whatever the repro shows.

/azp/agent/_work/1/s/deployment/charts/foo> /azp/agent/_work/_tool/helm/3.3.0/x64/linux-amd64/helm delete foo-alpha-acceptance-0812-232201 --debug
uninstall.go:92: [debug] uninstall: Deleting foo-alpha-acceptance-0812-232201
client.go:254: [debug] Starting delete for "foo-alpha-acceptance-0812-232201" Ingress
client.go:254: [debug] Starting delete for "foo-alpha-acceptance-0812-232201" Service
client.go:254: [debug] Starting delete for "foo-alpha-acceptance-0812-232201-worker" Deployment
client.go:254: [debug] Starting delete for "foo-alpha-acceptance-0812-232201-web" Deployment
client.go:254: [debug] Starting delete for "foo-alpha-acceptance-0812-232201" RoleBinding
client.go:254: [debug] Starting delete for "foo-alpha-acceptance-0812-232201" Role
client.go:254: [debug] Starting delete for "foo-alpha-acceptance-0812-232201" ClusterRoleBinding
client.go:254: [debug] Starting delete for "foo-alpha-acceptance-0812-232201" ClusterRole
client.go:254: [debug] Starting delete for "foo-alpha-acceptance-0812-232201" ServiceAccount
uninstall.go:129: [debug] purge requested for foo-alpha-acceptance-0812-232201
release "foo-alpha-acceptance-0812-232201" uninstalled

brandond commented Aug 13, 2020

Hmm, IIRC purge will force deletion, even if there are things like dangling owner references that should block it. I wonder why it's doing that.

helm/helm#5804

rez0n commented Aug 13, 2020

Hi, I'm facing a related issue. Out of the box, my k3s operated normally, but after a dozen deployments/deletions something went wrong.
Now, when I run kubectl delete -f wordpress.yml, the command does not complete; it just hangs.

kubectl -n wp get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/wordpress-mariadb-79cd76b69f-6zz7l   1/1     Running   0          93m  ### Another deployment
pod/wordpress-6d974d7b7c-w6qmt           1/1     Running   0          87m

NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/wordpress-mariadb   ClusterIP   None         <none>        3306/TCP   93m  ### Another deployment

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wordpress-mariadb   1/1     1            1           93m  ### Another deployment

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-mariadb-79cd76b69f   1         1         1       93m  ### Another deployment
replicaset.apps/wordpress-6d974d7b7c           1         1         1       87m

All resources were deleted except the ReplicaSet. The Pod kept being recreated until I ran kubectl -n wp delete rs wordpress-6d974d7b7c.
I have tested these manifests on a brand-new cluster and they work perfectly; it is the default WordPress manifest from the k8s docs.
Any help with debugging this? It seems the ReplicaSet somehow became protected from deletion.
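
The next time it happens, the same kind of inspection requested earlier in this thread could be applied to the stuck ReplicaSet before force-deleting it (a sketch using the names from the output above):

kubectl -n wp get rs wordpress-6d974d7b7c -o yaml | grep -E -A8 'ownerReferences|finalizers|deletionTimestamp'
# a dangling ownerReferences entry or a lingering finalizer would explain why it is not garbage-collected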

@theAkito (Author)

Given all this information, this issue should already have the kind/bug label. Three different people have now confirmed the exact same issue.

jonstelly commented Aug 13, 2020

Hmm iirc purge will force deletion, even if there are things like dangling owner reference that should block it. I wonder why it's doing that.

helm/helm#5804

Yeah, the purge seems to clean up the helm release and its history on uninstall, like the issue you linked, but it doesn't appear to be the same thing as kubectl delete --force. The debug log message comes from here.

I've been testing with the following:

Commands

  1. Create k3s cluster: k3d cluster create k3s-orphan-testing
  2. Create helm chart: helm create orphan-testing
  3. Modify chart's values.yaml to use 3 replicas: replicaCount: 3
  4. Run the PowerShell below:
$iterations=3;
1..$iterations | % {
    $time = [DateTime]::Now.ToString("MMdd-HHmmss");
    helm install "app-$time" ./orphan-testing --wait;
    helm upgrade "app-$time" ./orphan-testing --set replicaCount=2 --wait;
    helm delete  "app-$time";
}

Findings

  1. I DO NOT see the problem with the ReplicaSet or Pods remaining after deleting the helm deployment on the new k3d instance/cluster.
  2. I DO see the problem on my existing cluster. After running the above commands I'm left with 6 orphaned pods and 3 orphaned replica sets.

An additional interesting note: If I delete the orphaned ReplicaSet, that doesn't clean up the Pods either. I have to manually delete both the ReplicaSets and the Pods. And if I delete the Pods before the ReplicaSets, the pods get recreated.
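
For the record, the manual cleanup order that works here (a sketch using names from the output below; the instance label is assumed from the default chart generated by helm create) is ReplicaSet first, then whatever Pods are still left:

kubectl delete rs app-0813-143421-k3s-orphan-5f67b99c46 -n default
kubectl delete pods -n default -l app.kubernetes.io/instance=app-0813-143421
# deleting the Pods first just causes the still-present ReplicaSet to recreate them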

But from this, the problem seems to be something cluster-specific. I'm going to try rebooting all nodes on my cluster and then running the above test; if that doesn't fix it, I'll do OS updates and try again. I'll dig into the k3s logs too. To capture node version/kernel info before I change anything, here are the nodes from my cluster:

EDIT - After a reboot of the master: no improvement, same behavior with lingering ReplicaSets and Pods. No updates available for Ubuntu; I'm current there.

NAME     STATUS   ROLES    AGE   VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
pi4b     Ready    <none>   71d   v1.18.6+k3s1   192.168.8.21   <none>        Ubuntu 20.04.1 LTS   5.4.0-1015-raspi   containerd://1.3.3-k3s2
bug      Ready    <none>   31d   v1.18.6+k3s1   192.168.8.5    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.3.3-k3s2
pi4a     Ready    <none>   71d   v1.18.6+k3s1   192.168.8.20   <none>        Ubuntu 20.04.1 LTS   5.4.0-1015-raspi   containerd://1.3.3-k3s2
pi4c     Ready    <none>   71d   v1.18.6+k3s1   192.168.8.22   <none>        Ubuntu 20.04.1 LTS   5.4.0-1015-raspi   containerd://1.3.3-k3s2
oldtop   Ready    master   71d   v1.18.6+k3s1   192.168.8.3    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.3.3-k3s2
mac      Ready    <none>   71d   v1.18.6+k3s1   192.168.8.4    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   containerd://1.3.3-k3s2
pi4d     Ready    <none>   71d   v1.18.6+k3s1   192.168.8.23   <none>        Ubuntu 20.04.1 LTS   5.4.0-1015-raspi   containerd://1.3.3-k3s2

Findings from my cluster are below. This is the state after running the above. It's worth noting that the upgrade to reduce the replicaCount worked:

ReplicaSets

NAME                                                          DESIRED   CURRENT   READY   AGE
app-0813-143421-k3s-orphan-5f67b99c46                         2         2         2       16m
app-0813-143450-k3s-orphan-67d6668958                         2         2         2       15m
app-0813-143504-k3s-orphan-5fcc548bd                          2         2         2       15m

Pods

NAME                                              READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
app-0813-143421-k3s-orphan-5f67b99c46-hphfg       1/1     Running   0          18m     10.42.1.195   pi4a     <none>           <none>
app-0813-143421-k3s-orphan-5f67b99c46-tz46p       1/1     Running   0          18m     10.42.3.113   pi4c     <none>           <none>
app-0813-143450-k3s-orphan-67d6668958-8ffkt       1/1     Running   0          18m     10.42.0.160   oldtop   <none>           <none>
app-0813-143450-k3s-orphan-67d6668958-dblhm       1/1     Running   0          18m     10.42.2.185   pi4b     <none>           <none>
app-0813-143504-k3s-orphan-5fcc548bd-4fpcq        1/1     Running   0          17m     10.42.1.197   pi4a     <none>           <none>
app-0813-143504-k3s-orphan-5fcc548bd-59hwb        1/1     Running   0          17m     10.42.0.161   oldtop   <none>           <none>

Here are the events from my existing cluster:

LAST SEEN   TYPE      REASON              OBJECT                                                                   SUBOBJECT                     SOURCE                  MESSAGE                                                                                                                                                                                                                                                      FIRST SEEN   COUNT   NAME
9m30s       Normal    ScalingReplicaSet   deployment/app-0813-143421-k3s-orphan                                                                  deployment-controller   Scaled up replica set app-0813-143421-k3s-orphan-5f67b99c46 to 3                                                                                                                                                                                             9m30s        1       app-0813-143421-k3s-orphan.162aeadfe60b79d2
9m30s       Normal    SuccessfulCreate    replicaset/app-0813-143421-k3s-orphan-5f67b99c46                                                       replicaset-controller   Created pod: app-0813-143421-k3s-orphan-5f67b99c46-hphfg                                                                                                                                                                                                     9m30s        1       app-0813-143421-k3s-orphan-5f67b99c46.162aeadfe7449384
9m30s       Normal    SuccessfulCreate    replicaset/app-0813-143421-k3s-orphan-5f67b99c46                                                       replicaset-controller   Created pod: app-0813-143421-k3s-orphan-5f67b99c46-m8vgw                                                                                                                                                                                                     9m30s        1       app-0813-143421-k3s-orphan-5f67b99c46.162aeadfe915160e
<unknown>   Normal    Scheduled           pod/app-0813-143421-k3s-orphan-5f67b99c46-hphfg                                                        default-scheduler       Successfully assigned default/app-0813-143421-k3s-orphan-5f67b99c46-hphfg to pi4a                                                                                                                                                                            <unknown>    0       app-0813-143421-k3s-orphan-5f67b99c46-hphfg.162aeadfe9a00bf4
9m30s       Normal    SuccessfulCreate    replicaset/app-0813-143421-k3s-orphan-5f67b99c46                                                       replicaset-controller   Created pod: app-0813-143421-k3s-orphan-5f67b99c46-tz46p                                                                                                                                                                                                     9m30s        1       app-0813-143421-k3s-orphan-5f67b99c46.162aeadfea4c16d7
<unknown>   Normal    Scheduled           pod/app-0813-143421-k3s-orphan-5f67b99c46-m8vgw                                                        default-scheduler       Successfully assigned default/app-0813-143421-k3s-orphan-5f67b99c46-m8vgw to pi4b                                                                                                                                                                            <unknown>    0       app-0813-143421-k3s-orphan-5f67b99c46-m8vgw.162aeadfeb3a389f
<unknown>   Normal    Scheduled           pod/app-0813-143421-k3s-orphan-5f67b99c46-tz46p                                                        default-scheduler       Successfully assigned default/app-0813-143421-k3s-orphan-5f67b99c46-tz46p to pi4c                                                                                                                                                                            <unknown>    0       app-0813-143421-k3s-orphan-5f67b99c46-tz46p.162aeadfec2fab6c
9m29s       Normal    Pulling             pod/app-0813-143421-k3s-orphan-5f67b99c46-hphfg                          spec.containers{k3s-orphan}   kubelet, pi4a           Pulling image "nginx:1.16.0"                                                                                                                                                                                                                                 9m29s        1       app-0813-143421-k3s-orphan-5f67b99c46-hphfg.162aeae0182ccad6
9m29s       Normal    Pulling             pod/app-0813-143421-k3s-orphan-5f67b99c46-tz46p                          spec.containers{k3s-orphan}   kubelet, pi4c           Pulling image "nginx:1.16.0"                                                                                                                                                                                                                                 9m29s        1       app-0813-143421-k3s-orphan-5f67b99c46-tz46p.162aeae01b224b3f
9m29s       Normal    Pulling             pod/app-0813-143421-k3s-orphan-5f67b99c46-m8vgw                          spec.containers{k3s-orphan}   kubelet, pi4b           Pulling image "nginx:1.16.0"                                                                                                                                                                                                                                 9m29s        1       app-0813-143421-k3s-orphan-5f67b99c46-m8vgw.162aeae01b41de78
9m17s       Normal    Pulled              pod/app-0813-143421-k3s-orphan-5f67b99c46-m8vgw                          spec.containers{k3s-orphan}   kubelet, pi4b           Successfully pulled image "nginx:1.16.0"                                                                                                                                                                                                                     9m17s        1       app-0813-143421-k3s-orphan-5f67b99c46-m8vgw.162aeae306b93762
9m17s       Normal    Pulled              pod/app-0813-143421-k3s-orphan-5f67b99c46-hphfg                          spec.containers{k3s-orphan}   kubelet, pi4a           Successfully pulled image "nginx:1.16.0"                                                                                                                                                                                                                     9m17s        1       app-0813-143421-k3s-orphan-5f67b99c46-hphfg.162aeae30cf5fe26
9m16s       Normal    Pulled              pod/app-0813-143421-k3s-orphan-5f67b99c46-tz46p                          spec.containers{k3s-orphan}   kubelet, pi4c           Successfully pulled image "nginx:1.16.0"                                                                                                                                                                                                                     9m16s        1       app-0813-143421-k3s-orphan-5f67b99c46-tz46p.162aeae330d86873
9m14s       Normal    Created             pod/app-0813-143421-k3s-orphan-5f67b99c46-tz46p                          spec.containers{k3s-orphan}   kubelet, pi4c           Created container k3s-orphan                                                                                                                                                                                                                                 9m14s        1       app-0813-143421-k3s-orphan-5f67b99c46-tz46p.162aeae398c5aa64
9m14s       Normal    Started             pod/app-0813-143421-k3s-orphan-5f67b99c46-tz46p                          spec.containers{k3s-orphan}   kubelet, pi4c           Started container k3s-orphan                                                                                                                                                                                                                                 9m14s        1       app-0813-143421-k3s-orphan-5f67b99c46-tz46p.162aeae3a5ec8784
9m13s       Normal    Created             pod/app-0813-143421-k3s-orphan-5f67b99c46-hphfg                          spec.containers{k3s-orphan}   kubelet, pi4a           Created container k3s-orphan                                                                                                                                                                                                                                 9m13s        1       app-0813-143421-k3s-orphan-5f67b99c46-hphfg.162aeae4097869a4
9m12s       Normal    Started             pod/app-0813-143421-k3s-orphan-5f67b99c46-hphfg                          spec.containers{k3s-orphan}   kubelet, pi4a           Started container k3s-orphan                                                                                                                                                                                                                                 9m12s        1       app-0813-143421-k3s-orphan-5f67b99c46-hphfg.162aeae4182313a9
9m12s       Normal    Created             pod/app-0813-143421-k3s-orphan-5f67b99c46-m8vgw                          spec.containers{k3s-orphan}   kubelet, pi4b           Created container k3s-orphan                                                                                                                                                                                                                                 9m12s        1       app-0813-143421-k3s-orphan-5f67b99c46-m8vgw.162aeae43cdf3054
9m11s       Normal    Started             pod/app-0813-143421-k3s-orphan-5f67b99c46-m8vgw                          spec.containers{k3s-orphan}   kubelet, pi4b           Started container k3s-orphan                                                                                                                                                                                                                                 9m11s        1       app-0813-143421-k3s-orphan-5f67b99c46-m8vgw.162aeae44a2d4971
9m5s        Normal    ScalingReplicaSet   deployment/app-0813-143421-k3s-orphan                                                                  deployment-controller   Scaled down replica set app-0813-143421-k3s-orphan-5f67b99c46 to 2                                                                                                                                                                                           9m5s         1       app-0813-143421-k3s-orphan.162aeae5adc715cf
9m5s        Normal    SuccessfulDelete    replicaset/app-0813-143421-k3s-orphan-5f67b99c46                                                       replicaset-controller   Deleted pod: app-0813-143421-k3s-orphan-5f67b99c46-m8vgw                                                                                                                                                                                                     9m5s         1       app-0813-143421-k3s-orphan-5f67b99c46.162aeae5af173779
9m5s        Normal    Killing             pod/app-0813-143421-k3s-orphan-5f67b99c46-m8vgw                          spec.containers{k3s-orphan}   kubelet, pi4b           Stopping container k3s-orphan                                                                                                                                                                                                                                9m5s         1       app-0813-143421-k3s-orphan-5f67b99c46-m8vgw.162aeae5af84c3e6
9m2s        Normal    ScalingReplicaSet   deployment/app-0813-143450-k3s-orphan                                                                  deployment-controller   Scaled up replica set app-0813-143450-k3s-orphan-67d6668958 to 3                                                                                                                                                                                             9m2s         1       app-0813-143450-k3s-orphan.162aeae6821c5923
9m2s        Normal    SuccessfulCreate    replicaset/app-0813-143450-k3s-orphan-67d6668958                                                       replicaset-controller   Created pod: app-0813-143450-k3s-orphan-67d6668958-w8m66                                                                                                                                                                                                     9m2s         1       app-0813-143450-k3s-orphan-67d6668958.162aeae68334aaeb
9m2s        Normal    SuccessfulCreate    replicaset/app-0813-143450-k3s-orphan-67d6668958                                                       replicaset-controller   Created pod: app-0813-143450-k3s-orphan-67d6668958-dblhm                                                                                                                                                                                                     9m2s         1       app-0813-143450-k3s-orphan-67d6668958.162aeae6848ed4b7
<unknown>   Normal    Scheduled           pod/app-0813-143450-k3s-orphan-67d6668958-w8m66                                                        default-scheduler       Successfully assigned default/app-0813-143450-k3s-orphan-67d6668958-w8m66 to pi4a                                                                                                                                                                            <unknown>    0       app-0813-143450-k3s-orphan-67d6668958-w8m66.162aeae684fe3c3a
9m2s        Normal    SuccessfulCreate    replicaset/app-0813-143450-k3s-orphan-67d6668958                                                       replicaset-controller   Created pod: app-0813-143450-k3s-orphan-67d6668958-8ffkt                                                                                                                                                                                                     9m2s         1       app-0813-143450-k3s-orphan-67d6668958.162aeae6855e476f
<unknown>   Normal    Scheduled           pod/app-0813-143450-k3s-orphan-67d6668958-dblhm                                                        default-scheduler       Successfully assigned default/app-0813-143450-k3s-orphan-67d6668958-dblhm to pi4b                                                                                                                                                                            <unknown>    0       app-0813-143450-k3s-orphan-67d6668958-dblhm.162aeae686b13897
<unknown>   Normal    Scheduled           pod/app-0813-143450-k3s-orphan-67d6668958-8ffkt                                                        default-scheduler       Successfully assigned default/app-0813-143450-k3s-orphan-67d6668958-8ffkt to oldtop                                                                                                                                                                          <unknown>    0       app-0813-143450-k3s-orphan-67d6668958-8ffkt.162aeae687abe273
9m1s        Normal    Pulling             pod/app-0813-143450-k3s-orphan-67d6668958-8ffkt                          spec.containers{k3s-orphan}   kubelet, oldtop         Pulling image "nginx:1.16.0"                                                                                                                                                                                                                                 9m1s         1       app-0813-143450-k3s-orphan-67d6668958-8ffkt.162aeae6a6936216
9m1s        Normal    Pulled              pod/app-0813-143450-k3s-orphan-67d6668958-w8m66                          spec.containers{k3s-orphan}   kubelet, pi4a           Container image "nginx:1.16.0" already present on machine                                                                                                                                                                                                    9m1s         1       app-0813-143450-k3s-orphan-67d6668958-w8m66.162aeae6b2f87809
9m1s        Normal    Pulled              pod/app-0813-143450-k3s-orphan-67d6668958-dblhm                          spec.containers{k3s-orphan}   kubelet, pi4b           Container image "nginx:1.16.0" already present on machine                                                                                                                                                                                                    9m1s         1       app-0813-143450-k3s-orphan-67d6668958-dblhm.162aeae6ba6979a5
9m1s        Normal    Created             pod/app-0813-143450-k3s-orphan-67d6668958-w8m66                          spec.containers{k3s-orphan}   kubelet, pi4a           Created container k3s-orphan                                                                                                                                                                                                                                 9m1s         1       app-0813-143450-k3s-orphan-67d6668958-w8m66.162aeae6c8f8f7e6
9m1s        Normal    Created             pod/app-0813-143450-k3s-orphan-67d6668958-dblhm                          spec.containers{k3s-orphan}   kubelet, pi4b           Created container k3s-orphan                                                                                                                                                                                                                                 9m1s         1       app-0813-143450-k3s-orphan-67d6668958-dblhm.162aeae6d0b9aa49
9m1s        Normal    Started             pod/app-0813-143450-k3s-orphan-67d6668958-w8m66                          spec.containers{k3s-orphan}   kubelet, pi4a           Started container k3s-orphan                                                                                                                                                                                                                                 9m1s         1       app-0813-143450-k3s-orphan-67d6668958-w8m66.162aeae6d54f3b85
9m          Normal    Started             pod/app-0813-143450-k3s-orphan-67d6668958-dblhm                          spec.containers{k3s-orphan}   kubelet, pi4b           Started container k3s-orphan                                                                                                                                                                                                                                 9m           1       app-0813-143450-k3s-orphan-67d6668958-dblhm.162aeae6dcedd116
8m58s       Normal    Pulled              pod/app-0813-143450-k3s-orphan-67d6668958-8ffkt                          spec.containers{k3s-orphan}   kubelet, oldtop         Successfully pulled image "nginx:1.16.0"                                                                                                                                                                                                                     8m58s        1       app-0813-143450-k3s-orphan-67d6668958-8ffkt.162aeae76596cd34
8m58s       Normal    Created             pod/app-0813-143450-k3s-orphan-67d6668958-8ffkt                          spec.containers{k3s-orphan}   kubelet, oldtop         Created container k3s-orphan                                                                                                                                                                                                                                 8m58s        1       app-0813-143450-k3s-orphan-67d6668958-8ffkt.162aeae77143fab7
8m58s       Normal    Started             pod/app-0813-143450-k3s-orphan-67d6668958-8ffkt                          spec.containers{k3s-orphan}   kubelet, oldtop         Started container k3s-orphan                                                                                                                                                                                                                                 8m58s        1       app-0813-143450-k3s-orphan-67d6668958-8ffkt.162aeae77610ec11
8m51s       Normal    ScalingReplicaSet   deployment/app-0813-143450-k3s-orphan                                                                  deployment-controller   Scaled down replica set app-0813-143450-k3s-orphan-67d6668958 to 2                                                                                                                                                                                           8m51s        1       app-0813-143450-k3s-orphan.162aeae90b5a0539
8m51s       Normal    SuccessfulDelete    replicaset/app-0813-143450-k3s-orphan-67d6668958                                                       replicaset-controller   Deleted pod: app-0813-143450-k3s-orphan-67d6668958-w8m66                                                                                                                                                                                                     8m51s        1       app-0813-143450-k3s-orphan-67d6668958.162aeae90cc84d7e
8m51s       Normal    Killing             pod/app-0813-143450-k3s-orphan-67d6668958-w8m66                          spec.containers{k3s-orphan}   kubelet, pi4a           Stopping container k3s-orphan                                                                                                                                                                                                                                8m51s        1       app-0813-143450-k3s-orphan-67d6668958-w8m66.162aeae90cb37ce3
8m49s       Warning   Unhealthy           pod/app-0813-143450-k3s-orphan-67d6668958-w8m66                          spec.containers{k3s-orphan}   kubelet, pi4a           Liveness probe failed: Get http://10.42.1.196:80/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)                                                                                                  8m49s        1       app-0813-143450-k3s-orphan-67d6668958-w8m66.162aeae96ec80c02
8m48s       Normal    ScalingReplicaSet   deployment/app-0813-143504-k3s-orphan                                                                  deployment-controller   Scaled up replica set app-0813-143504-k3s-orphan-5fcc548bd to 3                                                                                                                                                                                              8m48s        1       app-0813-143504-k3s-orphan.162aeae9db144c03
8m47s       Normal    SuccessfulCreate    replicaset/app-0813-143504-k3s-orphan-5fcc548bd                                                        replicaset-controller   Created pod: app-0813-143504-k3s-orphan-5fcc548bd-4fpcq                                                                                                                                                                                                      8m47s        1       app-0813-143504-k3s-orphan-5fcc548bd.162aeae9dc849cbb
<unknown>   Normal    Scheduled           pod/app-0813-143504-k3s-orphan-5fcc548bd-4fpcq                                                         default-scheduler       Successfully assigned default/app-0813-143504-k3s-orphan-5fcc548bd-4fpcq to pi4a                                                                                                                                                                             <unknown>    0       app-0813-143504-k3s-orphan-5fcc548bd-4fpcq.162aeae9ddef2f88
8m47s       Normal    SuccessfulCreate    replicaset/app-0813-143504-k3s-orphan-5fcc548bd                                                        replicaset-controller   Created pod: app-0813-143504-k3s-orphan-5fcc548bd-ppxtj                                                                                                                                                                                                      8m47s        1       app-0813-143504-k3s-orphan-5fcc548bd.162aeae9dd915431
<unknown>   Normal    Scheduled           pod/app-0813-143504-k3s-orphan-5fcc548bd-ppxtj                                                         default-scheduler       Successfully assigned default/app-0813-143504-k3s-orphan-5fcc548bd-ppxtj to pi4b                                                                                                                                                                             <unknown>    0       app-0813-143504-k3s-orphan-5fcc548bd-ppxtj.162aeae9df51878b
8m47s       Normal    SuccessfulCreate    replicaset/app-0813-143504-k3s-orphan-5fcc548bd                                                        replicaset-controller   Created pod: app-0813-143504-k3s-orphan-5fcc548bd-59hwb                                                                                                                                                                                                      8m47s        1       app-0813-143504-k3s-orphan-5fcc548bd.162aeae9de9e9050
<unknown>   Normal    Scheduled           pod/app-0813-143504-k3s-orphan-5fcc548bd-59hwb                                                         default-scheduler       Successfully assigned default/app-0813-143504-k3s-orphan-5fcc548bd-59hwb to oldtop                                                                                                                                                                           <unknown>    0       app-0813-143504-k3s-orphan-5fcc548bd-59hwb.162aeae9e0d68d10
8m47s       Normal    Pulled              pod/app-0813-143504-k3s-orphan-5fcc548bd-59hwb                           spec.containers{k3s-orphan}   kubelet, oldtop         Container image "nginx:1.16.0" already present on machine                                                                                                                                                                                                    8m47s        1       app-0813-143504-k3s-orphan-5fcc548bd-59hwb.162aeaea0491cfdd
8m47s       Normal    Created             pod/app-0813-143504-k3s-orphan-5fcc548bd-59hwb                           spec.containers{k3s-orphan}   kubelet, oldtop         Created container k3s-orphan                                                                                                                                                                                                                                 8m47s        1       app-0813-143504-k3s-orphan-5fcc548bd-59hwb.162aeaea070d1ad5
8m47s       Normal    Started             pod/app-0813-143504-k3s-orphan-5fcc548bd-59hwb                           spec.containers{k3s-orphan}   kubelet, oldtop         Started container k3s-orphan                                                                                                                                                                                                                                 8m47s        1       app-0813-143504-k3s-orphan-5fcc548bd-59hwb.162aeaea0af87134
8m47s       Normal    Pulled              pod/app-0813-143504-k3s-orphan-5fcc548bd-4fpcq                           spec.containers{k3s-orphan}   kubelet, pi4a           Container image "nginx:1.16.0" already present on machine                                                                                                                                                                                                    8m47s        1       app-0813-143504-k3s-orphan-5fcc548bd-4fpcq.162aeaea0c4c2598
8m47s       Normal    Pulled              pod/app-0813-143504-k3s-orphan-5fcc548bd-ppxtj                           spec.containers{k3s-orphan}   kubelet, pi4b           Container image "nginx:1.16.0" already present on machine                                                                                                                                                                                                    8m47s        1       app-0813-143504-k3s-orphan-5fcc548bd-ppxtj.162aeaea14519354
8m46s       Normal    Created             pod/app-0813-143504-k3s-orphan-5fcc548bd-4fpcq                           spec.containers{k3s-orphan}   kubelet, pi4a           Created container k3s-orphan                                                                                                                                                                                                                                 8m46s        1       app-0813-143504-k3s-orphan-5fcc548bd-4fpcq.162aeaea23544d92
8m46s       Normal    Created             pod/app-0813-143504-k3s-orphan-5fcc548bd-ppxtj                           spec.containers{k3s-orphan}   kubelet, pi4b           Created container k3s-orphan                                                                                                                                                                                                                                 8m46s        1       app-0813-143504-k3s-orphan-5fcc548bd-ppxtj.162aeaea2a302999
8m46s       Normal    Started             pod/app-0813-143504-k3s-orphan-5fcc548bd-4fpcq                           spec.containers{k3s-orphan}   kubelet, pi4a           Started container k3s-orphan                                                                                                                                                                                                                                 8m46s        1       app-0813-143504-k3s-orphan-5fcc548bd-4fpcq.162aeaea307a80f3
8m46s       Normal    Started             pod/app-0813-143504-k3s-orphan-5fcc548bd-ppxtj                           spec.containers{k3s-orphan}   kubelet, pi4b           Started container k3s-orphan                                                                                                                                                                                                                                 8m46s        1       app-0813-143504-k3s-orphan-5fcc548bd-ppxtj.162aeaea36d48e42
8m33s       Normal    ScalingReplicaSet   deployment/app-0813-143504-k3s-orphan                                                                  deployment-controller   Scaled down replica set app-0813-143504-k3s-orphan-5fcc548bd to 2                                                                                                                                                                                            8m33s        1       app-0813-143504-k3s-orphan.162aeaed4eb4bd4f
8m33s       Normal    SuccessfulDelete    replicaset/app-0813-143504-k3s-orphan-5fcc548bd                                                        replicaset-controller   Deleted pod: app-0813-143504-k3s-orphan-5fcc548bd-ppxtj                                                                                                                                                                                                      8m33s        1       app-0813-143504-k3s-orphan-5fcc548bd.162aeaed500e066c
8m33s       Normal    Killing             pod/app-0813-143504-k3s-orphan-5fcc548bd-ppxtj                           spec.containers{k3s-orphan}   kubelet, pi4b           Stopping container k3s-orphan                                                                                                                                                                                                                                8m33s        1       app-0813-143504-k3s-orphan-5fcc548bd-ppxtj.162aeaed50803884
101s        Warning   FailedMount         pod/app-0813-143421-k3s-orphan-5f67b99c46-tz46p                                                        kubelet, pi4c           MountVolume.SetUp failed for volume "app-0813-143421-k3s-orphan-token-r26n6" : secret "app-0813-143421-k3s-orphan-token-r26n6" not found                                                                                                                     7m53s        11      app-0813-143421-k3s-orphan-5f67b99c46-tz46p.162aeaf68eb620de
93s         Warning   FailedMount         pod/app-0813-143421-k3s-orphan-5f67b99c46-hphfg                                                        kubelet, pi4a           MountVolume.SetUp failed for volume "app-0813-143421-k3s-orphan-token-r26n6" : secret "app-0813-143421-k3s-orphan-token-r26n6" not found                                                                                                                     7m45s        11      app-0813-143421-k3s-orphan-5f67b99c46-hphfg.162aeaf886ee2065
89s         Warning   FailedMount         pod/app-0813-143450-k3s-orphan-67d6668958-8ffkt                                                        kubelet, oldtop         MountVolume.SetUp failed for volume "app-0813-143450-k3s-orphan-token-nhmvn" : secret "app-0813-143450-k3s-orphan-token-nhmvn" not found                                                                                                                     7m41s        11      app-0813-143450-k3s-orphan-67d6668958-8ffkt.162aeaf971600022
87s         Warning   FailedMount         pod/app-0813-143504-k3s-orphan-5fcc548bd-4fpcq                                                         kubelet, pi4a           MountVolume.SetUp failed for volume "app-0813-143504-k3s-orphan-token-5vblj" : secret "app-0813-143504-k3s-orphan-token-5vblj" not found                                                                                                                     7m39s        11      app-0813-143504-k3s-orphan-5fcc548bd-4fpcq.162aeaf9e824faf9
82s         Warning   FailedMount         pod/app-0813-143450-k3s-orphan-67d6668958-dblhm                                                        kubelet, pi4b           MountVolume.SetUp failed for volume "app-0813-143450-k3s-orphan-token-nhmvn" : secret "app-0813-143450-k3s-orphan-token-nhmvn" not found                                                                                                                     7m33s        11      app-0813-143450-k3s-orphan-67d6668958-dblhm.162aeafb1cb8cb4e
64s         Warning   FailedMount         pod/app-0813-143504-k3s-orphan-5fcc548bd-59hwb                                                         kubelet, oldtop         MountVolume.SetUp failed for volume "app-0813-143504-k3s-orphan-token-5vblj" : secret "app-0813-143504-k3s-orphan-token-5vblj" not found                                                                                                                     7m16s        11      app-0813-143504-k3s-orphan-5fcc548bd-59hwb.162aeaff436ce56b

EDIT

And here is the event log from a successful cleanup on my k3d instance, for comparison. What stands out is that the log above reports SuccessfulDelete events on the ReplicaSets, yet the ReplicaSets never actually go away, and because they stay behind we never see the Killing events for the remaining pods like we do in the log below. A quick way to check for this on an affected cluster is sketched after the second log.

LAST SEEN   TYPE      REASON              OBJECT                                             SUBOBJECT                     SOURCE                             MESSAGE                                                                                                                                                        FIRST SEEN   COUNT   NAME
81s         Normal    ScalingReplicaSet   deployment/app-0813-151802-k3s-orphan                                            deployment-controller              Scaled up replica set app-0813-151802-k3s-orphan-6c446bf5b4 to 3                                                                                               81s          1       app-0813-151802-k3s-orphan.162aed41747d17a0
81s         Normal    SuccessfulCreate    replicaset/app-0813-151802-k3s-orphan-6c446bf5b4                                 replicaset-controller              Created pod: app-0813-151802-k3s-orphan-6c446bf5b4-2c6px                                                                                                       81s          1       app-0813-151802-k3s-orphan-6c446bf5b4.162aed41755c8ac0
81s         Normal    SuccessfulCreate    replicaset/app-0813-151802-k3s-orphan-6c446bf5b4                                 replicaset-controller              Created pod: app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8                                                                                                       81s          1       app-0813-151802-k3s-orphan-6c446bf5b4.162aed4175c5dbb0
<unknown>   Normal    Scheduled           pod/app-0813-151802-k3s-orphan-6c446bf5b4-2c6px                                  default-scheduler                  Successfully assigned default/app-0813-151802-k3s-orphan-6c446bf5b4-2c6px to k3d-k3s-orphan-server-0                                                           <unknown>    0       app-0813-151802-k3s-orphan-6c446bf5b4-2c6px.162aed4175c5887c
81s         Normal    SuccessfulCreate    replicaset/app-0813-151802-k3s-orphan-6c446bf5b4                                 replicaset-controller              Created pod: app-0813-151802-k3s-orphan-6c446bf5b4-h75jp                                                                                                       81s          1       app-0813-151802-k3s-orphan-6c446bf5b4.162aed4175cc19bc
<unknown>   Normal    Scheduled           pod/app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8                                  default-scheduler                  Successfully assigned default/app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8 to k3d-k3s-orphan-server-0                                                           <unknown>    0       app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8.162aed4176593c34
<unknown>   Normal    Scheduled           pod/app-0813-151802-k3s-orphan-6c446bf5b4-h75jp                                  default-scheduler                  Successfully assigned default/app-0813-151802-k3s-orphan-6c446bf5b4-h75jp to k3d-k3s-orphan-server-0                                                           <unknown>    0       app-0813-151802-k3s-orphan-6c446bf5b4-h75jp.162aed417660562c
81s         Normal    Pulled              pod/app-0813-151802-k3s-orphan-6c446bf5b4-2c6px    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Container image "nginx:1.16.0" already present on machine                                                                                                      81s          1       app-0813-151802-k3s-orphan-6c446bf5b4-2c6px.162aed41977cf1e4
81s         Normal    Created             pod/app-0813-151802-k3s-orphan-6c446bf5b4-2c6px    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Created container k3s-orphan                                                                                                                                   81s          1       app-0813-151802-k3s-orphan-6c446bf5b4-2c6px.162aed4199a74bf4
81s         Normal    Started             pod/app-0813-151802-k3s-orphan-6c446bf5b4-2c6px    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Started container k3s-orphan                                                                                                                                   81s          1       app-0813-151802-k3s-orphan-6c446bf5b4-2c6px.162aed419d84247c
81s         Normal    Pulled              pod/app-0813-151802-k3s-orphan-6c446bf5b4-h75jp    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Container image "nginx:1.16.0" already present on machine                                                                                                      81s          1       app-0813-151802-k3s-orphan-6c446bf5b4-h75jp.162aed41a0062d08
81s         Normal    Pulled              pod/app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Container image "nginx:1.16.0" already present on machine                                                                                                      81s          1       app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8.162aed41a0a1f4b8
80s         Normal    Created             pod/app-0813-151802-k3s-orphan-6c446bf5b4-h75jp    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Created container k3s-orphan                                                                                                                                   80s          1       app-0813-151802-k3s-orphan-6c446bf5b4-h75jp.162aed41a27f63c4
80s         Normal    Created             pod/app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Created container k3s-orphan                                                                                                                                   80s          1       app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8.162aed41a301594c
80s         Normal    Started             pod/app-0813-151802-k3s-orphan-6c446bf5b4-h75jp    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Started container k3s-orphan                                                                                                                                   80s          1       app-0813-151802-k3s-orphan-6c446bf5b4-h75jp.162aed41a68a2ae4
80s         Normal    Started             pod/app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Started container k3s-orphan                                                                                                                                   80s          1       app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8.162aed41a6fcbe24
73s         Normal    ScalingReplicaSet   deployment/app-0813-151802-k3s-orphan                                            deployment-controller              Scaled down replica set app-0813-151802-k3s-orphan-6c446bf5b4 to 2                                                                                             73s          1       app-0813-151802-k3s-orphan.162aed437681a26c
73s         Normal    SuccessfulDelete    replicaset/app-0813-151802-k3s-orphan-6c446bf5b4                                 replicaset-controller              Deleted pod: app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8                                                                                                       73s          1       app-0813-151802-k3s-orphan-6c446bf5b4.162aed4376cecb28
73s         Normal    Killing             pod/app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Stopping container k3s-orphan                                                                                                                                  73s          1       app-0813-151802-k3s-orphan-6c446bf5b4-lvfb8.162aed4376e86f9c
70s         Normal    Killing             pod/app-0813-151802-k3s-orphan-6c446bf5b4-h75jp    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Stopping container k3s-orphan                                                                                                                                  70s          1       app-0813-151802-k3s-orphan-6c446bf5b4-h75jp.162aed442a5f0464
70s         Normal    Killing             pod/app-0813-151802-k3s-orphan-6c446bf5b4-2c6px    spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Stopping container k3s-orphan                                                                                                                                  70s          1       app-0813-151802-k3s-orphan-6c446bf5b4-2c6px.162aed442a7a11c8
69s         Normal    ScalingReplicaSet   deployment/app-0813-151814-k3s-orphan                                            deployment-controller              Scaled up replica set app-0813-151814-k3s-orphan-969f775b5 to 3                                                                                                69s          1       app-0813-151814-k3s-orphan.162aed443d4dfd00
69s         Normal    SuccessfulCreate    replicaset/app-0813-151814-k3s-orphan-969f775b5                                  replicaset-controller              Created pod: app-0813-151814-k3s-orphan-969f775b5-f9mn9                                                                                                        69s          1       app-0813-151814-k3s-orphan-969f775b5.162aed443d93b958
69s         Normal    SuccessfulCreate    replicaset/app-0813-151814-k3s-orphan-969f775b5                                  replicaset-controller              Created pod: app-0813-151814-k3s-orphan-969f775b5-pcd4d                                                                                                        69s          1       app-0813-151814-k3s-orphan-969f775b5.162aed443e27314c
<unknown>   Normal    Scheduled           pod/app-0813-151814-k3s-orphan-969f775b5-f9mn9                                   default-scheduler                  Successfully assigned default/app-0813-151814-k3s-orphan-969f775b5-f9mn9 to k3d-k3s-orphan-server-0                                                            <unknown>    0       app-0813-151814-k3s-orphan-969f775b5-f9mn9.162aed443e35e944
69s         Normal    SuccessfulCreate    replicaset/app-0813-151814-k3s-orphan-969f775b5                                  replicaset-controller              Created pod: app-0813-151814-k3s-orphan-969f775b5-nqhmz                                                                                                        69s          1       app-0813-151814-k3s-orphan-969f775b5.162aed443e279678
<unknown>   Normal    Scheduled           pod/app-0813-151814-k3s-orphan-969f775b5-nqhmz                                   default-scheduler                  Successfully assigned default/app-0813-151814-k3s-orphan-969f775b5-nqhmz to k3d-k3s-orphan-server-0                                                            <unknown>    0       app-0813-151814-k3s-orphan-969f775b5-nqhmz.162aed443ef7b36c
<unknown>   Normal    Scheduled           pod/app-0813-151814-k3s-orphan-969f775b5-pcd4d                                   default-scheduler                  Successfully assigned default/app-0813-151814-k3s-orphan-969f775b5-pcd4d to k3d-k3s-orphan-server-0                                                            <unknown>    0       app-0813-151814-k3s-orphan-969f775b5-pcd4d.162aed443f24c8d4
69s         Normal    Pulled              pod/app-0813-151814-k3s-orphan-969f775b5-f9mn9     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Container image "nginx:1.16.0" already present on machine                                                                                                      69s          1       app-0813-151814-k3s-orphan-969f775b5-f9mn9.162aed44625765a0
69s         Normal    Created             pod/app-0813-151814-k3s-orphan-969f775b5-f9mn9     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Created container k3s-orphan                                                                                                                                   69s          1       app-0813-151814-k3s-orphan-969f775b5-f9mn9.162aed4464674c98
69s         Normal    Started             pod/app-0813-151814-k3s-orphan-969f775b5-f9mn9     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Started container k3s-orphan                                                                                                                                   69s          1       app-0813-151814-k3s-orphan-969f775b5-f9mn9.162aed446c78cbc8
68s         Normal    Pulled              pod/app-0813-151814-k3s-orphan-969f775b5-nqhmz     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Container image "nginx:1.16.0" already present on machine                                                                                                      68s          1       app-0813-151814-k3s-orphan-969f775b5-nqhmz.162aed44712a3774
68s         Normal    Pulled              pod/app-0813-151814-k3s-orphan-969f775b5-pcd4d     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Container image "nginx:1.16.0" already present on machine                                                                                                      68s          1       app-0813-151814-k3s-orphan-969f775b5-pcd4d.162aed44721c476c
68s         Normal    Created             pod/app-0813-151814-k3s-orphan-969f775b5-nqhmz     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Created container k3s-orphan                                                                                                                                   68s          1       app-0813-151814-k3s-orphan-969f775b5-nqhmz.162aed44743ef88c
68s         Normal    Created             pod/app-0813-151814-k3s-orphan-969f775b5-pcd4d     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Created container k3s-orphan                                                                                                                                   68s          1       app-0813-151814-k3s-orphan-969f775b5-pcd4d.162aed4475378ed4
68s         Normal    Started             pod/app-0813-151814-k3s-orphan-969f775b5-nqhmz     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Started container k3s-orphan                                                                                                                                   68s          1       app-0813-151814-k3s-orphan-969f775b5-nqhmz.162aed4478743a48
68s         Normal    Started             pod/app-0813-151814-k3s-orphan-969f775b5-pcd4d     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Started container k3s-orphan                                                                                                                                   68s          1       app-0813-151814-k3s-orphan-969f775b5-pcd4d.162aed4479645e74
63s         Normal    ScalingReplicaSet   deployment/app-0813-151814-k3s-orphan                                            deployment-controller              Scaled down replica set app-0813-151814-k3s-orphan-969f775b5 to 2                                                                                              63s          1       app-0813-151814-k3s-orphan.162aed45c872b2f8
63s         Normal    SuccessfulDelete    replicaset/app-0813-151814-k3s-orphan-969f775b5                                  replicaset-controller              Deleted pod: app-0813-151814-k3s-orphan-969f775b5-pcd4d                                                                                                        63s          1       app-0813-151814-k3s-orphan-969f775b5.162aed45c8bafcd4
63s         Normal    Killing             pod/app-0813-151814-k3s-orphan-969f775b5-pcd4d     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Stopping container k3s-orphan                                                                                                                                  63s          1       app-0813-151814-k3s-orphan-969f775b5-pcd4d.162aed45c8cf63e0
60s         Normal    Killing             pod/app-0813-151814-k3s-orphan-969f775b5-f9mn9     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Stopping container k3s-orphan                                                                                                                                  60s          1       app-0813-151814-k3s-orphan-969f775b5-f9mn9.162aed4678d960d8
60s         Normal    Killing             pod/app-0813-151814-k3s-orphan-969f775b5-nqhmz     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Stopping container k3s-orphan                                                                                                                                  60s          1       app-0813-151814-k3s-orphan-969f775b5-nqhmz.162aed4678ebe2bc
59s         Normal    ScalingReplicaSet   deployment/app-0813-151824-k3s-orphan                                            deployment-controller              Scaled up replica set app-0813-151824-k3s-orphan-6cb6849d8 to 3                                                                                                59s          1       app-0813-151824-k3s-orphan.162aed4689504684
59s         Normal    SuccessfulCreate    replicaset/app-0813-151824-k3s-orphan-6cb6849d8                                  replicaset-controller              Created pod: app-0813-151824-k3s-orphan-6cb6849d8-ddwzb                                                                                                        59s          1       app-0813-151824-k3s-orphan-6cb6849d8.162aed46899cca04
<unknown>   Normal    Scheduled           pod/app-0813-151824-k3s-orphan-6cb6849d8-ddwzb                                   default-scheduler                  Successfully assigned default/app-0813-151824-k3s-orphan-6cb6849d8-ddwzb to k3d-k3s-orphan-server-0                                                            <unknown>    0       app-0813-151824-k3s-orphan-6cb6849d8-ddwzb.162aed468a0de838
59s         Normal    SuccessfulCreate    replicaset/app-0813-151824-k3s-orphan-6cb6849d8                                  replicaset-controller              Created pod: app-0813-151824-k3s-orphan-6cb6849d8-t5z9h                                                                                                        59s          1       app-0813-151824-k3s-orphan-6cb6849d8.162aed468a2b1110
59s         Normal    SuccessfulCreate    replicaset/app-0813-151824-k3s-orphan-6cb6849d8                                  replicaset-controller              Created pod: app-0813-151824-k3s-orphan-6cb6849d8-rz9wb                                                                                                        59s          1       app-0813-151824-k3s-orphan-6cb6849d8.162aed468a3ec598
<unknown>   Normal    Scheduled           pod/app-0813-151824-k3s-orphan-6cb6849d8-rz9wb                                   default-scheduler                  Successfully assigned default/app-0813-151824-k3s-orphan-6cb6849d8-rz9wb to k3d-k3s-orphan-server-0                                                            <unknown>    0       app-0813-151824-k3s-orphan-6cb6849d8-rz9wb.162aed468b037398
<unknown>   Normal    Scheduled           pod/app-0813-151824-k3s-orphan-6cb6849d8-t5z9h                                   default-scheduler                  Successfully assigned default/app-0813-151824-k3s-orphan-6cb6849d8-t5z9h to k3d-k3s-orphan-server-0                                                            <unknown>    0       app-0813-151824-k3s-orphan-6cb6849d8-t5z9h.162aed468b026674
59s         Normal    Pulled              pod/app-0813-151824-k3s-orphan-6cb6849d8-ddwzb     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Container image "nginx:1.16.0" already present on machine                                                                                                      59s          1       app-0813-151824-k3s-orphan-6cb6849d8-ddwzb.162aed46b519b5c0
59s         Normal    Created             pod/app-0813-151824-k3s-orphan-6cb6849d8-ddwzb     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Created container k3s-orphan                                                                                                                                   59s          1       app-0813-151824-k3s-orphan-6cb6849d8-ddwzb.162aed46b7223694
59s         Normal    Started             pod/app-0813-151824-k3s-orphan-6cb6849d8-ddwzb     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Started container k3s-orphan                                                                                                                                   59s          1       app-0813-151824-k3s-orphan-6cb6849d8-ddwzb.162aed46bcfd7664
59s         Normal    Pulled              pod/app-0813-151824-k3s-orphan-6cb6849d8-t5z9h     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Container image "nginx:1.16.0" already present on machine                                                                                                      59s          1       app-0813-151824-k3s-orphan-6cb6849d8-t5z9h.162aed46bfddd360
59s         Normal    Pulled              pod/app-0813-151824-k3s-orphan-6cb6849d8-rz9wb     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Container image "nginx:1.16.0" already present on machine                                                                                                      59s          1       app-0813-151824-k3s-orphan-6cb6849d8-rz9wb.162aed46c0d29148
58s         Normal    Created             pod/app-0813-151824-k3s-orphan-6cb6849d8-t5z9h     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Created container k3s-orphan                                                                                                                                   58s          1       app-0813-151824-k3s-orphan-6cb6849d8-t5z9h.162aed46c2c5bfe8
58s         Normal    Created             pod/app-0813-151824-k3s-orphan-6cb6849d8-rz9wb     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Created container k3s-orphan                                                                                                                                   58s          1       app-0813-151824-k3s-orphan-6cb6849d8-rz9wb.162aed46c369d4fc
58s         Normal    Started             pod/app-0813-151824-k3s-orphan-6cb6849d8-t5z9h     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Started container k3s-orphan                                                                                                                                   58s          1       app-0813-151824-k3s-orphan-6cb6849d8-t5z9h.162aed46c76494fc
58s         Normal    Started             pod/app-0813-151824-k3s-orphan-6cb6849d8-rz9wb     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Started container k3s-orphan                                                                                                                                   58s          1       app-0813-151824-k3s-orphan-6cb6849d8-rz9wb.162aed46c7ff1d88
49s         Normal    ScalingReplicaSet   deployment/app-0813-151824-k3s-orphan                                            deployment-controller              Scaled down replica set app-0813-151824-k3s-orphan-6cb6849d8 to 2                                                                                              49s          1       app-0813-151824-k3s-orphan.162aed4902bb104c
49s         Normal    SuccessfulDelete    replicaset/app-0813-151824-k3s-orphan-6cb6849d8                                  replicaset-controller              Deleted pod: app-0813-151824-k3s-orphan-6cb6849d8-t5z9h                                                                                                        49s          1       app-0813-151824-k3s-orphan-6cb6849d8.162aed490311d3a0
49s         Normal    Killing             pod/app-0813-151824-k3s-orphan-6cb6849d8-t5z9h     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Stopping container k3s-orphan                                                                                                                                  49s          1       app-0813-151824-k3s-orphan-6cb6849d8-t5z9h.162aed49032cd2f4
46s         Normal    Killing             pod/app-0813-151824-k3s-orphan-6cb6849d8-rz9wb     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Stopping container k3s-orphan                                                                                                                                  46s          1       app-0813-151824-k3s-orphan-6cb6849d8-rz9wb.162aed499d9b2930
46s         Normal    Killing             pod/app-0813-151824-k3s-orphan-6cb6849d8-ddwzb     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Stopping container k3s-orphan                                                                                                                                  46s          1       app-0813-151824-k3s-orphan-6cb6849d8-ddwzb.162aed499dafaa04
45s         Warning   Unhealthy           pod/app-0813-151824-k3s-orphan-6cb6849d8-ddwzb     spec.containers{k3s-orphan}   kubelet, k3d-k3s-orphan-server-0   Readiness probe failed: Get http://10.42.0.18:80/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)    45s          1       app-0813-151824-k3s-orphan-6cb6849d8-ddwzb.162aed4a01455168
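
For anyone who wants to check for the same symptom on their own cluster, something along these lines should show it. This is only a sketch; the namespace and the event reason filter are assumptions on my part:

# List what is still left in the namespace after the app has been deleted.
$ kubectl get deployments,replicasets,pods -n default

# Deletion-related events, for comparison with the Killing events in the k3d log.
$ kubectl get events -n default --field-selector reason=Killing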

@rez0n

rez0n commented Aug 14, 2020

If any contributor wants to look into this behavior, I can provide access to the affected cluster for experiments.
FYI @brandond

@davidnuzik davidnuzik added the kind/bug Something isn't working label Aug 14, 2020
@davidnuzik davidnuzik added this to the v1.19 - Backlog milestone Aug 14, 2020
@jonstelly

I can't give direct access but am happy to run any commands or one-off builds to diagnose.

This cluster has also been through a few upgrades. I had to do a cluster rebuild at some point but I think this current cluster started off as a 1.17 k3s release that got updated to 1.18.

@cubic3d

cubic3d commented Aug 17, 2020

I'm running the latest K3OS, 0.11.0, freshly installed as a single node. After installing a chart, I deleted the HelmChart resource, but it did not remove the pods. Currently I have another HelmChart stuck that I cannot delete, neither with --now nor with grace period and force.
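
A sketch of the kind of check that might show why the delete hangs; the chart name and namespace are placeholders, and clearing a finalizer by hand bypasses whatever cleanup the controller was supposed to perform:

# Check whether the stuck HelmChart resource still carries finalizers.
$ kubectl get helmchart <chart-name> -n kube-system -o jsonpath='{.metadata.finalizers}'

# If the controller never removes them, clearing them manually lets the delete finish.
$ kubectl patch helmchart <chart-name> -n kube-system --type=merge -p '{"metadata":{"finalizers":[]}}'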

@davidnuzik
Contributor

#2074 may be related. Linking the two.

@rancher-max
Contributor

I haven't been able to reproduce this with newly created clusters or with clusters upgraded from v1.17.x to v1.18.x. I am using rancher v2.5.1. Has anyone else who has seen this issue been able to recreate it on a fresh k3s setup, either through the Rancher UI or directly through Helm? If so, please provide the steps and configs you used. It looks like this only happens on older setups, which leads me to believe that something else has caused the cluster to degrade, and that this issue probably isn't specific to Rancher apps but would affect other deployments and replicasets as well, possibly when created via Helm.

Most recently I attempted the following steps:

  1. Create a v1.17.7 k3s cluster
  2. Import into rancher
  3. Deploy wordpress app v7.3.8 through Rancher UI
  4. Upgrade cluster through Rancher UI to v1.18.9
  5. Upgrade wordpress app through Rancher UI to v9.0.3
  6. Delete wordpress app through Rancher UI

All resources were deleted successfully. See commands below.

# Before Cluster Upgrade
$ k get all -n wordpress
NAME                             READY   STATUS    RESTARTS   AGE
pod/svclb-wordpress-28qnw        0/2     Pending   0          12m
pod/wordpress-mariadb-0          1/1     Running   0          12m
pod/wordpress-579b5866db-xknl9   1/1     Running   0          12m

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/wordpress-mariadb   ClusterIP      10.43.13.69     <none>        3306/TCP                     12m
service/wordpress           LoadBalancer   10.43.251.102   <pending>     80:32475/TCP,443:31805/TCP   12m

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-wordpress   1         1         0       1            0           <none>          12m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wordpress   1/1     1            1           12m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-579b5866db   1         1         1       12m

NAME                                 READY   AGE
statefulset.apps/wordpress-mariadb   1/1     12m

# After Cluster Upgrade
$ k get all -n wordpress
NAME                             READY   STATUS    RESTARTS   AGE
pod/wordpress-579b5866db-xknl9   1/1     Running   0          15m
pod/svclb-wordpress-9j6dz        0/2     Pending   0          2m
pod/wordpress-mariadb-0          1/1     Running   0          15m

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/wordpress-mariadb   ClusterIP      10.43.13.69     <none>        3306/TCP                     15m
service/wordpress           LoadBalancer   10.43.251.102   <pending>     80:32475/TCP,443:31805/TCP   15m

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-wordpress   1         1         0       1            0           <none>          15m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wordpress   1/1     1            1           15m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-579b5866db   1         1         1       15m

NAME                                 READY   AGE
statefulset.apps/wordpress-mariadb   1/1     15m

# After app upgrade
$ k get all -n wordpress
NAME                             READY   STATUS    RESTARTS   AGE
pod/wordpress-579b5866db-xknl9   1/1     Running   0          16m
pod/svclb-wordpress-9j6dz        0/2     Pending   0          2m55s
pod/wordpress-mariadb-0          0/1     Running   0          24s

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/wordpress-mariadb   ClusterIP      10.43.13.69     <none>        3306/TCP                     16m
service/wordpress           LoadBalancer   10.43.251.102   <pending>     80:32475/TCP,443:31805/TCP   16m

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-wordpress   1         1         0       1            0           <none>          16m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wordpress   1/1     1            1           16m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-579b5866db   1         1         1       16m

NAME                                 READY   AGE
statefulset.apps/wordpress-mariadb   0/1     16m

# After delete
$ k get all -n wordpress
No resources found in wordpress namespace.

@rancher-max
Contributor

I'd like to be able to reproduce this and get the issue fixed. With that said: @cubic3d @theAkito @jonstelly, do any of you know a reliable way to reproduce this on a fresh setup?

@davidnuzik
Contributor

Hi @cubic3d @theAkito @jonstelly, as Max mentioned, we're having a hard time reproducing this. Might we be missing something? Max has tried hard to reproduce it, but we're not quite sure what else to try. Any help you can offer would be very much appreciated. We don't want to just close this out as unable to reproduce if we can help it.
Maybe there is something we are missing, but we reviewed the issue and comments carefully and still can't reproduce it.

@cubic3d

cubic3d commented Nov 3, 2020

Hey you two, thank you for your work and help. Unfortunately I don't have access to the environment anymore, and I haven't been able to reproduce it myself; it appeared randomly.

@rez0n

rez0n commented Nov 3, 2020

I have a few notes:
I haven't used the Rancher UI.
I hadn't upgraded the cluster before the problem appeared.
The only thing I changed in the cluster was adding the certbot helm chart.
Also, I am still able to provide direct access to my cluster for collaborators to run tests.

@cubic3d

cubic3d commented Nov 3, 2020

I have a few notes:
I haven't used the Rancher UI.
I hadn't upgraded the cluster before the problem appeared.
The only thing I changed in the cluster was adding the certbot helm chart.
Also, I am still able to provide direct access to my cluster for collaborators to run tests.

The certbot chart has some nested resources that had problems with previous versions of helm and kubectl, which led to freezes of the apply and delete actions. I can't look up the issue right now, but that sounds like it may be the cause of undeleted resources after the CRDs are initially installed. A rough way to check is sketched below.
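
A rough way to see whether that is what happened here; the CRD name is a placeholder, substitute whatever the chart actually installed:

# See which CRDs from the chart are still installed after the delete.
$ kubectl get crds

# For a suspect CRD, look for instances stuck terminating: a set deletionTimestamp
# plus remaining finalizers would explain a hanging delete.
$ kubectl get <crd-plural-name> --all-namespaces -o custom-columns=NAME:.metadata.name,DELETING:.metadata.deletionTimestamp,FINALIZERS:.metadata.finalizers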

@jonstelly

My cluster seems to have blown up...

starting kubernetes: preparing server: start cluster and https: raft_start(): io: load closed segment 0000000173875511-0000000173875606: found 95 entries (expected 96)

Not sure if I'll be able to get it back up and running or need to rebuild. Will let you know if I get it back up and running or if I see the issue after recreating.

@rancher-max
Contributor

@jonstelly That error looks like #1403 -- do you think that could be the case for you here?

@jonstelly

Yeah, that looks like it. I'll rebuild the cluster and report back here if I see the same pod lingering behavior. Might take me a couple days to get it built back up to that point.

@davidnuzik
Contributor

We will stand by a while longer for input, such as from @jonstelly.
However, it sounds like most folks are not seeing the issue anymore. Some other problems have been mentioned, and we should open separate issues for those as needed.

@davidnuzik
Contributor

Hi. As per my prior comment, it seems like we are having a tough time reproducing this now, especially with newer versions. For any comments describing issues not directly related to what this issue captured in its summary, please open separate GitHub issues.

I'd like to close this out. If there are any serious objections, please comment; we'll then need to work together to try to reproduce this and identify a root cause.

@brandond
Member

Since custom resources are involved, this may be related: kubernetes/kubernetes#92743
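
If that's what is happening here, the lingering pods should still carry ownerReferences pointing at ReplicaSets (or a custom resource) that the garbage collector failed to resolve. A quick check, with placeholder names:

# Print the owner recorded on one of the orphaned pods.
$ kubectl get pod <orphaned-pod> -n <namespace> -o jsonpath='{.metadata.ownerReferences[*].kind} {.metadata.ownerReferences[*].name}'

# Then see whether that owner still exists; a missing owner would match the linked upstream issue.
$ kubectl get replicaset <owner-name> -n <namespace>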

@jonstelly

I've had my cluster up for a couple weeks now and haven't seen the issue in the new cluster. Thanks for the work trying to track this one down.

Labels
kind/bug Something isn't working

8 participants