This repository has been archived by the owner on Sep 12, 2023. It is now read-only.

minor errors in pod.go #100

Open
wants to merge 1 commit into master

Conversation

xfate123

@xfate123 xfate123 commented Aug 4, 2020

Just some minor errors I found while studying the code.

@kubeflow-bot

This change is Reviewable

@k8s-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign richardsliu
You can assign the PR to them by writing /assign @richardsliu in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@@ -24,7 +24,7 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
log "github.com/sirupsen/logrus"
"k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
Member


This change is unnecessary

Member


I think it was created by gofmt, so maybe we can keep it.
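
As an editorial aside (not part of the PR itself): both import forms resolve to the same identifier, because the package declared at k8s.io/api/core/v1 is itself named v1, so the explicit name is purely cosmetic. This is consistent with the suggestion above that a formatting tool generated it. A minimal sketch:

package main

import (
	"fmt"

	// Explicit name, as in the PR's new line. The bare form
	// "k8s.io/api/core/v1" resolves to the same identifier, v1,
	// because v1 is the package's declared name.
	v1 "k8s.io/api/core/v1"
)

func main() {
	// Either import form lets callers reference v1.PodFailed, v1.PodRunning, etc.
	fmt.Println(v1.PodFailed, v1.PodRunning)
}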

@@ -392,9 +392,9 @@ func (jc *JobController) ReconcilePods(
}
// Check if the pod is retryable.
if spec.RestartPolicy == apiv1.RestartPolicyExitCode {
- if pod.Status.Phase == v1.PodFailed && trainutil.IsRetryableExitCode(exitCode) {
+ if pod.Status.Phase == v1.PodFailed && !trainutil.IsRetryableExitCode(exitCode) {
Member

@terrytangyuan terrytangyuan Aug 4, 2020


I think this was intended and correct, but we probably need to improve the log message here.

Member


The original logic here is to provide flexibility to define your own retryable exit code. This should not be changed.

Since Kubernetes doesn't have a restart method, we delete the pod and wait for the next reconcile loop to create a new one. Because this is asynchronous, "restart" here may be confusing to users.
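
To make the intended flow concrete, here is a small self-contained sketch (not the operator's actual code; treating exit codes of 128 and above as retryable is an assumption for illustration, not necessarily trainutil's exact rule) of why the unnegated condition is the right one: only a failed pod with a retryable exit code is deleted, so the next reconcile loop can recreate it.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// isRetryableExitCode stands in for trainutil.IsRetryableExitCode; the
// ">= 128" rule (terminated by signal) is an illustrative assumption.
func isRetryableExitCode(exitCode int32) bool {
	return exitCode >= 128
}

// shouldDeleteForRetry mirrors the original (unnegated) check: a failed pod
// with a retryable exit code is deleted so the next reconcile loop can create
// a replacement, since Kubernetes has no in-place "restart" for pods.
func shouldDeleteForRetry(phase v1.PodPhase, exitCode int32) bool {
	return phase == v1.PodFailed && isRetryableExitCode(exitCode)
}

func main() {
	fmt.Println(shouldDeleteForRetry(v1.PodFailed, 137)) // true: killed by SIGKILL, worth retrying
	fmt.Println(shouldDeleteForRetry(v1.PodFailed, 1))   // false: permanent application error
}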

@@ -86,7 +86,7 @@ func (jc *JobController) AddPod(obj interface{}) {
logger := commonutil.LoggerForPod(pod, jc.Controller.GetAPIGroupVersionKind().Kind)

if job == nil {
- if pod.Labels[apiv1.GroupNameLabel] == jc.Controller.GetGroupNameLabelValue() {
+ if pod.Labels[apiv1.GroupNameLabel] != jc.Controller.GetGroupNameLabelValue() {
Contributor


Good catch.

Member

@Jeffwan Jeffwan Aug 10, 2020


@xfate123

Please check the logic here.

func (jc *JobController) resolveControllerRef(namespace string, controllerRef *metav1.OwnerReference) metav1.Object {
	// We can't look up by UID, so look up by Name and then verify UID.
	// Don't even try to look up by Name if it's the wrong Kind.
	if controllerRef.Kind != jc.Controller.GetAPIGroupVersionKind().Kind {
		return nil
	}
	job, err := jc.Controller.GetJobFromInformerCache(namespace, controllerRef.Name)
	if err != nil {
		return nil
	}
	if job.GetUID() != controllerRef.UID {
		// The controller we found with this Name is not the same one that the
		// ControllerRef points to.
		return nil
	}
	return job
}

If we cannot find the job, we end the reconcile loop and return directly. The only thing that matters is whether we want a meaningful log.

There are several reasons job == nil here:

  1. Kind doesn't match
  2. GroupNameLabel doesn't match
  3. Job doesn't exist. In this case, the pod is an orphan pod.

The 3rd one is the only case where we want to persist a log. Does that make sense?

Author


@Jeffwan
Thank you so much, Jeff. It's much clearer after your patient explanation, but I am still a little confused. I think the three cases should be:

1. Kind doesn't match
2. Name unmatched
3. Name matched but UID unmatched

My understanding is that the third case cannot prove this pod is an orphan pod; it can only prove that the controllerRef points to a different job.
Thank you again, I really appreciate your help.

Member

@Jeffwan Jeffwan Aug 12, 2020


@xfate123 You are right. I didn't follow the logic in resolveControllerRef; UID mismatch is one of the cases. We check GroupNameLabel again on the caller side, which I think is unnecessary, since we use the fixed value kubeflow.org for most of the operators.

My understanding is that the third case cannot prove this pod is an orphan pod; it can only prove that the controllerRef points to a different job.

I may not have explained this clearly. You are right: it could be an orphan pod (the job has been deleted, case 2) or a pod pointing to a different job (UID mismatch, case 3). Case 1 seems to match the criterion as well.
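
To summarize the cases settled above, a hedged, self-contained sketch; whyNil, jobLookup, and the TFJob example values are illustrative stand-ins, not the kubeflow/common API. It enumerates why resolveControllerRef can return nil for a pod's controllerRef:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// jobLookup stands in for GetJobFromInformerCache: it reports whether a job
// with the given name exists in the cache and, if so, returns its UID.
type jobLookup func(namespace, name string) (uid string, found bool)

// whyNil enumerates the reasons resolveControllerRef returns nil:
// (1) Kind mismatch, (2) job not found (the pod is an orphan),
// (3) UID mismatch (the name now belongs to a different job).
func whyNil(ref metav1.OwnerReference, namespace, expectedKind string, lookup jobLookup) string {
	if ref.Kind != expectedKind {
		return "kind mismatch: not owned by this controller type"
	}
	uid, found := lookup(namespace, ref.Name)
	if !found {
		return "job not found: orphan pod"
	}
	if uid != string(ref.UID) {
		return "UID mismatch: controllerRef points to a different job"
	}
	return "job resolved"
}

func main() {
	// A cache with no jobs at all, so the lookup always misses.
	emptyCache := func(namespace, name string) (string, bool) { return "", false }
	ref := metav1.OwnerReference{Kind: "TFJob", Name: "mnist-train", UID: "1234"}
	fmt.Println(whyNil(ref, "default", "TFJob", emptyCache)) // job not found: orphan pod
}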

Author


@Jeffwan Thank you so much for your clear explanation


georgkaleido pushed a commit to georgkaleido/common that referenced this pull request Jun 9, 2022

7 participants