OCPBUGS-38543: OCP web console shows pod status as Init:0/1 after using Native sidecars #14313
Conversation
@cyril-ui-developer: This pull request references Jira Issue OCPBUGS-38543, which is invalid:
The bug has been updated to refer to the pull request using the external bug tracker. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/retest
/jira refresh
@cyril-ui-developer: This pull request references Jira Issue OCPBUGS-38543, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug
/retest
@cyril-ui-developer: This pull request references Jira Issue OCPBUGS-38543, which is valid. 3 validation(s) were run on this bug
frontend/public/module/k8s/pods.ts (Outdated)
@@ -248,7 +248,7 @@ export const podPhase = (pod: PodKind): PodPhase => {
        : `Init:ExitCode:${terminated.exitCode}`;
    } else if (waiting && waiting.reason && waiting.reason !== 'PodInitializing') {
      phase = `Init:${waiting.reason}`;
-   } else {
+   } else if (!running && !ready) {
FYI: After going through the docs linked above, I am still not sure how these conditions are derived, but adding this logic fixes the issue. It would be good to double-check this.
@cyril-ui-developer I think the root cause of the associated bug is that our podPhase function has fallen out of sync with the kube printPod function. The kube version referenced in our comment is 1.17, and kube is now at v1.31. That being said, we should be able to reuse part of the logic from that kube function to fix this bug (see my comments).
We should probably also open a follow-on story or bug to revise this whole podPhase function and make sure it is calculating pod phase identically to the k8s 1.31 printPod function.
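For reference, here is a rough TypeScript sketch of how the printPod treatment of restartable init containers could map onto this loop. This is illustrative only (simplified stand-in types and a hypothetical initPhase helper), not the console's actual code or the final change in this PR:

    // Hypothetical, simplified sketch of mirroring kube printPod's handling of
    // restartable (native sidecar) init containers. Types are stand-ins for the
    // console's k8s types.
    type ContainerState = {
      terminated?: { exitCode: number; reason?: string; signal?: number };
      waiting?: { reason?: string };
      running?: { startedAt?: string };
    };
    type InitContainerStatus = { name: string; started?: boolean; state: ContainerState };
    type InitContainerSpec = { name: string; restartPolicy?: 'Always' };

    const initPhase = (
      specs: InitContainerSpec[],
      statuses: InitContainerStatus[],
      fallbackPhase: string,
    ): string => {
      let phase = fallbackPhase;
      for (let i = 0; i < statuses.length; i++) {
        const status = statuses[i];
        const { terminated, waiting } = status.state;
        const spec = specs.find((c) => c.name === status.name);

        // Init containers that exited successfully never affect the displayed phase.
        if (terminated && terminated.exitCode === 0) {
          continue;
        }
        // Restartable (native sidecar) init containers that have started are
        // skipped, mirroring kube's printPod, instead of pinning the phase at Init.
        if (spec?.restartPolicy === 'Always' && status.started) {
          continue;
        }
        if (terminated && terminated.reason) {
          phase = `Init:${terminated.reason}`;
        } else if (terminated) {
          phase = terminated.signal != null
            ? `Init:Signal:${terminated.signal}`
            : `Init:ExitCode:${terminated.exitCode}`;
        } else if (waiting?.reason && waiting.reason !== 'PodInitializing') {
          phase = `Init:${waiting.reason}`;
        } else {
          phase = `Init:${i}/${specs.length}`;
        }
        break;
      }
      return phase;
    };

The key point is the second continue: a started sidecar is skipped like a completed init container, so a pod whose remaining init containers are all running sidecars falls through to its own phase (e.g. Running) instead of being pinned at Init:0/1.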
frontend/public/module/k8s/pods.ts (Outdated)
@@ -248,7 +248,7 @@ export const podPhase = (pod: PodKind): PodPhase => {
        : `Init:ExitCode:${terminated.exitCode}`;
    } else if (waiting && waiting.reason && waiting.reason !== 'PodInitializing') {
      phase = `Init:${waiting.reason}`;
-   } else {
+   } else if (!running && !ready) {
IIUC, this would cause the loop over init containers to be terminated after encountering the first sidecar, which is probably not what we want.
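As a side note for readers, the early-exit behavior these suggestions rely on comes from lodash: _.each stops iterating only when the callback explicitly returns false, while true (or undefined) moves on to the next element. A minimal illustration:

    import * as _ from 'lodash';

    const visited: number[] = [];
    _.each([1, 2, 3, 4], (n) => {
      visited.push(n);
      // Returning false stops the iteration; returning true keeps it going.
      return n !== 2;
    });
    console.log(visited); // [1, 2]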
@@ -234,7 +234,7 @@ export const podPhase = (pod: PodKind): PodPhase => {
    let phase = pod.status.phase || pod.status.reason;

    _.each(pod.status.initContainerStatuses, (container: ContainerStatus, i: number) => {
-     const { terminated, waiting } = container.state;
+     const { terminated, waiting, running, ready } = container.state;
      if (terminated && terminated.exitCode === 0) {
        return true;
      }
I think the k8s printPod func handles sidecars here and continues the loop.
Oh, something like this, right?
if (running && ready) { return true; }
@TheRealJon I think this might be the likely cause. This condition contradicts the other condition to "Change pod status back to 'Running' if there is at least one container still reporting as 'Running' status". If the else condition is removed, that would fix the issue.
(force-pushed from edbe571 to f05c68b)
(force-pushed from f05c68b to 019a2e6)
frontend/public/module/k8s/pods.ts (Outdated)
      if (terminated && terminated.exitCode === 0) {
        return true;
      }

      if (running && container.started && initContainerSpec?.restartPolicy) {
The restartPolicy property allows Always only, so I think it looks cleaner not to assign it. WDYT?
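For context, the Kubernetes API currently defines Always as the only accepted value for an init container's restartPolicy; setting it is what marks the container as a native sidecar. An illustrative TypeScript shape (not the console's actual type definitions):

    // Illustrative only; mirrors k8s.io/api/core/v1 ContainerRestartPolicy, which
    // today defines Always as its only value (valid solely on init containers).
    type ContainerRestartPolicy = 'Always';

    interface InitContainer {
      name: string;
      image: string;
      // Present and set to 'Always' => the init container is a restartable sidecar.
      restartPolicy?: ContainerRestartPolicy;
    }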
frontend/public/module/k8s/pods.ts (Outdated)
      if (terminated && terminated.exitCode === 0) {
        return true;
      }

      if (running && container.started && initContainerSpec?.restartPolicy) {
For clarity, I think it's better to have the explicit equality check and copy the order of the boolean checks from the original k8s code. I don't think we need to check both container.started and container.status.running. The k8s code only checks container.started.
Suggested change:
-     if (running && container.started && initContainerSpec?.restartPolicy) {
+     if (initContainerSpec?.restartPolicy === 'Always' && container.started) {
(force-pushed from 019a2e6 to 90bfbe1)
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: cyril-ui-developer, TheRealJon. The full list of commands accepted by this bot can be found here. The pull request process is described here.
/retest
@cyril-ui-developer: all tests passed!
@cyril-ui-developer: Jira Issue OCPBUGS-38543: All pull requests linked via external trackers have merged. Jira Issue OCPBUGS-38543 has been moved to the MODIFIED state.
/cherrypick release-4.18
@cyril-ui-developer: new pull request created: #14598
/cherrypick release-4.18
/cherrypick release-4.17
@cyril-ui-developer: new pull request could not be created: failed to create pull request against openshift/console#release-4.18 from head openshift-cherrypick-robot:cherry-pick-14313-to-release-4.18: status code 422 not one of [201], body: {"message":"Validation Failed","errors":[{"resource":"PullRequest","code":"custom","message":"No commits between openshift:release-4.18 and openshift-cherrypick-robot:cherry-pick-14313-to-release-4.18"}],"documentation_url":"https://docs.github.com/rest/pulls/pulls#create-a-pull-request","status":"422"}
@cyril-ui-developer: new pull request created: #14616
Before: (screenshot)
After: (screenshot)