
podStatusPhases support containercreating status #580

Closed
lovejoy opened this issue May 28, 2021 · 15 comments · Fixed by #834
Labels: kind/feature, lifecycle/stale

Comments


lovejoy commented May 28, 2021

Is your feature request related to a problem? Please describe.

A network or CNI issue can leave pods stuck in ContainerCreating indefinitely, similar to #62.
Describe the solution you'd like

podStatusPhases currently only supports Running and Pending; it should support other statuses as well.

lovejoy added the kind/feature label on May 28, 2021

damemi commented Jun 1, 2021

ContainerCreating isn't a valid pod status phase, which is what we check for: https://github.com/kubernetes/api/blob/v0.21.1/core/v1/types.go#L2506-L2526. I'm not sure how kubectl determines the ContainerCreating status, but we could look into what it technically is so that we can check for it.
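
For reference, here is a rough Go sketch (not kubectl's actual code) of how a display status like ContainerCreating can be derived from the API objects: start from the pod's phase and let a container waiting reason override it.

// Assumes: import v1 "k8s.io/api/core/v1"
// displayStatus derives a human-readable status string from a pod,
// loosely mimicking how kubectl overrides the pod phase with a
// container waiting reason such as "ContainerCreating".
func displayStatus(pod *v1.Pod) string {
	reason := string(pod.Status.Phase)
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil && cs.State.Waiting.Reason != "" {
			reason = cs.State.Waiting.Reason
		}
	}
	return reason
}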


damemi commented Jun 3, 2021

Thanks @lovejoy
So it looks like to add this, we would need to add logic to do the following:

for each pod
  for each container
    if containerStatus.State.Waiting != nil && containerStatus.State.Waiting.Reason == "ContainerCreating"
      evict this pod

We could add this logic into where we currently check status phases.
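
A minimal runnable version of that check against the k8s.io/api/core/v1 types (the function name is a placeholder, not existing descheduler API):

// Assumes: import v1 "k8s.io/api/core/v1"
// hasContainerCreating reports whether any container in the pod is
// waiting with reason "ContainerCreating"; the caller would then decide
// whether to evict the pod.
func hasContainerCreating(pod *v1.Pod) bool {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil && cs.State.Waiting.Reason == "ContainerCreating" {
			return true
		}
	}
	return false
}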

@bytetwin

If a pod has multiple containers and only one of them is stuck in the ContainerCreating state, how should we handle the eviction? The pod may be unusable with the stuck container, so does it make sense to evict?


damemi commented Jun 16, 2021

That is a good question. I don't think we can answer it accurately without gathering more user feedback. From this request, though, it sounds like any container in ContainerCreating will trigger the kubectl output status that @lovejoy is asking to act on.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Sep 14, 2021
@seanmalloy

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label on Oct 1, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Dec 30, 2021

a7i commented Jan 21, 2022

> Thanks @lovejoy
> So it looks like to add this, we would need to add logic to do the following:
>
>     for each pod
>       for each container
>         if containerStatus.State.Waiting != nil && containerStatus.State.Waiting.Reason == "ContainerCreating"
>           evict this pod
>
> We could add this logic into where we currently check status phases.

This would also help with CrashLoopBackOff containers, i.e. containerStatus.State.Waiting.Reason == "CrashLoopBackOff". We are moving off of an in-house tool, so I would be open to proposing a PR in the next few days. Any ideas on what the schema should look like?

  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
        reasons:
        - "CrashLoopBackOff"
        - "ContainerWaiting"
        podStatusPhases:
        - "Pending"
        - "Running"

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 20, 2022

damemi commented Mar 2, 2022

/remove-lifecycle rotten

@a7i sorry, this slipped past me... I think it might be confusing for some users to have phases and reasons as separate fields. We've seen a couple of reports of people asking why podStatusPhases doesn't support a specific phase, when what they're asking for isn't actually a phase (similar to this one).

It might be easier for people if we lump everything into a single field, so they don't need to know the technical details behind the status they see from kubectl. Do you see any potential problems with that?
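
For illustration, a combined field could look something like this (a hypothetical sketch, not a merged schema):

  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
        states:
        - "Pending"
        - "ContainerCreating"
        - "CrashLoopBackOff"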

k8s-ci-robot removed the lifecycle/rotten label on Mar 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on May 31, 2022

a7i commented May 31, 2022

/remove-lifecycle rotten


a7i commented May 31, 2022

> It might be easier for people if we lump everything into a single field, so they don't need to know the technical details behind the status they see from kubectl.

@damemi I do not see any issues with that. Will work on a PR soon-ish.


a7i commented May 31, 2022

/assign
