Bug 2012069: Refactoring Status controller #498
Conversation
Skipping CI for Draft Pull Request.
I think I would move the related objects settings to a separate function. Some inspiration can be found in the cluster-monitoring-operator :-)
LoL... I was doing something very similar. Thanks for the inspiration. Something roughly like the sketch below.
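For readers following along, this is only a minimal sketch of the idea being discussed, not the actual change in the PR. The helper name `relatedObjects` and the object entries are hypothetical; only the `configv1.ObjectReference` type comes from the ClusterOperator status API:

```go
package status

import configv1 "github.com/openshift/api/config/v1"

// relatedObjects is a hypothetical helper that keeps the related-objects
// list in one place instead of inlining it in the status-merging code.
// The entries below are illustrative only.
func relatedObjects(namespace string) []configv1.ObjectReference {
	return []configv1.ObjectReference{
		{Resource: "namespaces", Name: namespace},
		{Group: "apps", Resource: "deployments", Namespace: namespace, Name: "insights-operator"},
	}
}
```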
/retest
```diff
@@ -131,6 +131,9 @@ func (c *Controller) merge(clusterOperator *configv1.ClusterOperator) *configv1.
 		clusterOperator = newClusterOperator(c.name, nil)
 	}

+	// make sure to start a clean status controller
+	c.ctrlStatus.reset()
```
@tremes So, after a few tests (and fighting with the e2e tests), it seems that this is the best solution. We keep an instance of the status controller in the controller (controllers everywhere), and it encapsulates the logic for handling the statuses. But it needs to be reset every time the merge is executed; otherwise, the operator status would be wrong.
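To make the pattern concrete, here is a minimal sketch of what is being described. The `controllerStatus` type, its field, and the `newClusterOperator` stub are hypothetical stand-ins; only the `c.ctrlStatus.reset()` call at the top of `merge` comes from the diff above:

```go
package status

import configv1 "github.com/openshift/api/config/v1"

// controllerStatus is a hypothetical stand-in for the state the controller
// accumulates between merges; the real struct lives in status.go.
type controllerStatus struct {
	statuses []string
}

// reset drops all accumulated state so the next merge starts clean.
func (s *controllerStatus) reset() {
	s.statuses = nil
}

type Controller struct {
	name       string
	ctrlStatus *controllerStatus
}

// newClusterOperator is stubbed here; the real one builds a named resource.
func newClusterOperator(name string, _ *configv1.ClusterOperatorStatus) *configv1.ClusterOperator {
	return &configv1.ClusterOperator{}
}

func (c *Controller) merge(clusterOperator *configv1.ClusterOperator) *configv1.ClusterOperator {
	if clusterOperator == nil {
		clusterOperator = newClusterOperator(c.name, nil)
	}
	// Without this reset, statuses gathered during a previous merge would
	// leak into this run and the reported operator status could be wrong.
	c.ctrlStatus.reset()
	// ... gather statuses and build the operator conditions ...
	return clusterOperator
}
```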
Yes, I am looking at it, and it seems we are resetting everything (also the line https://github.com/openshift/insights-operator/pull/498/files#diff-10c33b0f428af9470aef2747e42fc97d97150d338b151e836d9cdebbf4fbef42R149) every time...
Yeah, but in this case it is correct. We are "resetting" because the merge method receives the clusterOperator when it is called.
```go
}
type args struct {
	condition configv1.ClusterStatusConditionType
}
```
This type is a little bit redundant IMO.
Elaborate.
I think it's not necessary to wrap `condition configv1.ClusterStatusConditionType` in a new type. IOW, you can use `configv1.ClusterStatusConditionType` directly, IMO.
I see... I prefer to keep the `args` and `fields` wrapped. It makes the test structures more standardized.
Regarding `fields`: why, please? I can't see any benefit TBH. It only adds unnecessary parts to the test IMO. Using `conditions` you can omit https://github.com/openshift/insights-operator/pull/498/files#diff-6cd7b045851c73c1120b823f3d07a763bfb7e20aa20e241ac5d3e27e862d57eaR119-R121 and just do `tt.conditions.findCondition(tt.args.condition)` then.
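To make the trade-off concrete, here is a sketch of the two table-driven test shapes being debated. The `conditions` type and `findCondition` below are simplified, hypothetical stand-ins for the ones in the PR, kept minimal so the sketch compiles on its own:

```go
package status

import (
	"testing"

	configv1 "github.com/openshift/api/config/v1"
)

// Simplified stand-ins; the real implementations live in status.go.
type conditions map[configv1.ClusterStatusConditionType]*configv1.ClusterOperatorStatusCondition

func (c conditions) findCondition(t configv1.ClusterStatusConditionType) *configv1.ClusterOperatorStatusCondition {
	return c[t]
}

// Wrapped style (as in the PR): an args struct standardizes the table shape.
func TestFindConditionWrapped(t *testing.T) {
	type args struct {
		condition configv1.ClusterStatusConditionType
	}
	tests := []struct {
		name       string
		conditions conditions
		args       args
		wantNil    bool
	}{
		{"missing condition returns nil", conditions{}, args{configv1.OperatorDegraded}, true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := tt.conditions.findCondition(tt.args.condition); (got == nil) != tt.wantNil {
				t.Errorf("findCondition() = %v, wantNil %v", got, tt.wantNil)
			}
		})
	}
}

// Direct style (the suggestion above): drop the wrapper, use the type as-is.
func TestFindConditionDirect(t *testing.T) {
	tests := []struct {
		name       string
		conditions conditions
		condition  configv1.ClusterStatusConditionType
		wantNil    bool
	}{
		{"missing condition returns nil", conditions{}, configv1.OperatorDegraded, true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := tt.conditions.findCondition(tt.condition); (got == nil) != tt.wantNil {
				t.Errorf("findCondition() = %v, wantNil %v", got, tt.wantNil)
			}
		})
	}
}
```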
Aside from the comment from @tremes about "The operator is healthy" being logged right after logging an error, I see nothing wrong. The code is alright and at least somewhat commented. I tested this version of IO locally and it ran just fine. Not giving a final approval yet, to give others a chance to further review the PR.
After the fixes that have been made and a discussion with other team members, I believe this PR is now ready.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: natiiix, rluders. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest-required Please review the full test history for this PR and help us cut down flakes.
6 similar comments
@rluders: This pull request references Bugzilla bug 2012069, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
No GitHub users were found matching the public email listed for the QA contact in Bugzilla (dmisharo@redhat.com), skipping review request. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@rluders: This pull request references Bugzilla bug 2012069, which is valid. 3 validation(s) were run on this bug
No GitHub users were found matching the public email listed for the QA contact in Bugzilla (dmisharo@redhat.com), skipping review request. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest-required Please review the full test history for this PR and help us cut down flakes.
1 similar comment
@rluders: All pull requests linked via external trackers have merged: Bugzilla bug 2012069 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This PR changes `status.go`, organizing the logic into smaller pieces and making it a little bit better.

Categories
Sample Archive
None
Documentation
None
Unit Tests
None
Privacy
Yes. There is no sensitive data in the newly collected information.
Changelog
Breaking Changes
No
References
https://issues.redhat.com/browse/CCXDEV-5350
https://bugzilla.redhat.com/show_bug.cgi?id=2012069
https://access.redhat.com/solutions/???