Populate machineset status #180
Conversation
Hi @k4leung4. Thanks for your PR. I'm waiting for a kubernetes or kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @jessicaochen
Force-pushed from ee39a39 to b2ad52a
listers "sigs.k8s.io/cluster-api/pkg/client/listers_generated/cluster/v1alpha1" | ||
"sigs.k8s.io/cluster-api/pkg/controller/sharedinformers" | ||
) | ||
|
||
const ( |
Perhaps this const belongs in the status file?
moved.
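(For context, a minimal sketch of what the relocated constant might look like; the file name and value are assumptions — only the constant's existence comes from the diff.)

```go
// status.go (file name assumed)

// statusUpdateRetries bounds how many times we re-attempt a status
// update before giving up; the value shown here is illustrative.
const statusUpdateRetries = 1
```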
return updatedMS, nil
}
// Stop retrying if we exceed statusUpdateRetries - the machineSet will be requeued with a rate limit.
if i >= statusUpdateRetries {
Do you have a retry test? I.e., the first attempt fails, the second attempt works?
done.
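(For readers curious what such a test can look like: a rough sketch using the fake clientset's reactor chain, where the first update fails and the second succeeds. Package paths, fixture names like machineSet and newStatus, and the updateMachineSetStatus signature are assumptions pieced together from the diff, not the PR's actual test.)

```go
import (
	"errors"
	"testing"

	"k8s.io/apimachinery/pkg/runtime"
	clienttesting "k8s.io/client-go/testing"
	fake "sigs.k8s.io/cluster-api/pkg/client/clientset_generated/clientset/fake"
)

func TestUpdateMachineSetStatusRetries(t *testing.T) {
	// machineSet and newStatus are assumed test fixtures.
	fakeClient := fake.NewSimpleClientset(machineSet)

	calls := 0
	fakeClient.PrependReactor("update", "machinesets",
		func(action clienttesting.Action) (bool, runtime.Object, error) {
			calls++
			if calls == 1 {
				// Fail the first attempt with a transient server error.
				return true, nil, errors.New("transient server error")
			}
			// Let later attempts fall through to the default reactor.
			return false, nil, nil
		})

	_, err := updateMachineSetStatus(
		fakeClient.ClusterV1alpha1().MachineSets(machineSet.Namespace),
		machineSet, newStatus)
	if err != nil {
		t.Fatalf("expected the retry to succeed on the second attempt: %v", err)
	}
	if calls < 2 {
		t.Fatalf("expected at least two update calls, got %d", calls)
	}
}
```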
// Always updates status as machines come up or die.
_, err = updateMachineSetStatus(c.clusterAPIClient.ClusterV1alpha1().MachineSets(machineSet.Namespace), machineSet, newStatus)
if err != nil {
return fmt.Errorf("failed to update machine set status, %v", err)
Recommend replacing the "," with a ":", since it signifies that the error is the reason we "failed to update". It also matches the general style of errors in this project.
changed.
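(For illustration, the resulting form reads:)

```go
if err != nil {
	// ":" reads as "failed because of err", matching the project's error style.
	return fmt.Errorf("failed to update machine set status: %v", err)
}
```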
- return c.syncReplicas(machineSet, filteredMachines)
+ syncErr := c.syncReplicas(machineSet, filteredMachines)
+
+ ms := machineSet.DeepCopy()
Two questions:
- Why is it OK for the code below to continue if syncErr contains an error?
- Given #1 above, we can reach line 85 with syncErr populated with an error value. Since an error on line 85 will cause us to reach line 87, why is it OK that we could potentially lose an error value in "syncErr"?
The reason to continue is to ensure the MachineSet status, which contains the number of replicas/labeled/ready/available machines, is correct.
syncReplicas may have changed one of the counts in the status, and it is worthwhile to update it.
You make a good point about possibly dropping syncErr.
I changed it so that the two errors are combined.
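(A minimal sketch of that combination, assuming the shape of the surrounding code from the diff; the PR's actual wording and mechanism may differ — it could, for example, use an error aggregate instead of fmt.Errorf.)

```go
// Run the replica sync first, but hold onto its error instead of returning.
syncErr := c.syncReplicas(machineSet, filteredMachines)

// Always update status as machines come up or die, even if the sync failed,
// so the replica/labeled/ready/available counts stay accurate.
_, err := updateMachineSetStatus(
	c.clusterAPIClient.ClusterV1alpha1().MachineSets(machineSet.Namespace),
	machineSet, newStatus)
if err != nil {
	if syncErr != nil {
		// Combine the two failures so neither is silently dropped.
		return fmt.Errorf("failed to sync machines: %v; failed to update machine set status: %v", syncErr, err)
	}
	return fmt.Errorf("failed to update machine set status: %v", err)
}
return syncErr
```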
return updatedMS, nil
}
// Stop retrying if we exceed statusUpdateRetries - the machineSet will be requeued with a rate limit.
if i >= statusUpdateRetries {
Can this package depend on cluster-api/util? If so, using the retry from there would be simpler.
It can depend on cluster-api/util, though I'm not sure it makes things simpler.
The Retry there is geared more toward synchronizing external dependencies, where exponential backoff makes sense.
I simply want to retry a request in case the server fails for whatever reason.
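(Concretely, the simple bounded retry being described might look like the sketch below; the type names, package aliases, and signature are assumptions pieced together from the diff snippets.)

```go
// v1alpha1 = generated typed client package, clusterv1alpha1 = API types
// package (aliases assumed).
func updateMachineSetStatus(c v1alpha1.MachineSetInterface, machineSet *clusterv1alpha1.MachineSet, newStatus clusterv1alpha1.MachineSetStatus) (*clusterv1alpha1.MachineSet, error) {
	ms := machineSet.DeepCopy()
	for i := 0; ; i++ {
		ms.Status = newStatus
		updatedMS, err := c.UpdateStatus(ms)
		if err == nil {
			return updatedMS, nil
		}
		// Stop retrying if we exceed statusUpdateRetries - the machineSet
		// will be requeued with a rate limit. (A fuller version would re-GET
		// the MachineSet between attempts to pick up a fresh resourceVersion.)
		if i >= statusUpdateRetries {
			return nil, err
		}
	}
}
```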
This PR was split off from PR kubernetes-sigs#165 to make it a smaller PR. There are edge cases that will be implemented in a separate PR.
/assign @spew @jessicaochen
/approve
lgtm (leaving backslash for secondary reviewer)
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jessicaochen, k4leung4. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/ok-to-test
@spew did you have any other questions or concerns about this PR?
/cc @spew
no other comments! lgtm.
/lgtm
@kubernetes/kube-deploy-reviewers