Unify operators into one binary. #21
Conversation
@markusthoemmes: 0 warnings.
In response to this:
Proposed Changes
At this point, there is very little reason to spawn both operators into their own pods. We can unify them into one binary and run them off of the same caches to reduce overhead by quite a bit.
Release Note
Unified the operator binaries into one.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Welcome @markusthoemmes! It looks like this is your first PR to knative-sandbox/operator 🎉
Hi @markusthoemmes. Thanks for your PR. I'm waiting for a knative-sandbox member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here.
/ok-to-test
config/role.yaml
Outdated
@@ -207,139 +208,4 @@ aggregationRule:
# automatically.
- matchExpressions:
  - {key: eventing.knative.dev/release, operator: Exists}
rules: [] # Rules are automatically filled in by the controller manager.
---
kind: ClusterRole
I think you'll still need this Role (or something like it) in order for the operator to begin bootstrapping.
Did you find it was unnecessary? It would be great if we only needed the aggregated role, but I'm suspicious that we need more than that.
The original knative-serving-operator role looked like a superset of this. I honestly haven't checked that in depth, though; I mostly eyeballed it.
Oh, nice. That's entirely possible.
One change we had to make recently was to allow create/delete/list/watch/update for Jobs. I think you may need to re-add that, as without it our upgrade tests were failing.
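For illustration, an aggregated ClusterRole carrying the Job verbs mentioned above might look roughly like this. This is a sketch only: the resource names are hypothetical, and the label keys follow the snippets quoted in this PR, not a verified copy of the final role.yaml:

```yaml
# Sketch: aggregated role plus an explicit grant for the Job verbs above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: knative-operator-aggregated # hypothetical name
aggregationRule:
  clusterRoleSelectors:
  - matchExpressions:
    - {key: serving.knative.dev/release, operator: Exists}
  - matchExpressions:
    - {key: eventing.knative.dev/release, operator: Exists}
rules: [] # Rules are automatically filled in by the controller manager.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: knative-operator-jobs # hypothetical name
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "delete", "list", "watch", "update"]
```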
/lgtm
cmd/operator/main.go
Outdated
if err != nil {
	log.Fatal("Error building kubeconfig", err)
}
sharedmain.MainWithConfig(signals.NewContext(), "serving-operator", cfg, knativeserving.NewController)
sharedmain.MainWithConfig(signals.NewContext(), "serving-operator", cfg,
Shall we change the name to operator or knative-operator?
@markusthoemmes Generally it looks good to me, except for a naming issue.
Good point. We can easily have the operator delete the old deployments when it starts up, similar to how the …
Force-pushed from 8465a2e to 0919ef4.
Sorry, I had to heavily rebase against the version changes. Squashed in that process to ease reviewing again.
Oh nice... a case of "works on my machine" 😂. Gonna look at getting log output from our tests.
Force-pushed from 1c54836 to a74d2d3.
@markusthoemmes go-coverage asks for unit test coverage of newly added files. I am not sure how difficult/easy it would be to add that for upgrade.go, which removes the deployments and service accounts.
Should be straightforward; I've got that on my todo list.
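The startup cleanup being discussed can be sketched with a stdlib-only stub. Here deleteFn and errNotFound are stand-ins for the client-go delete call and the Kubernetes NotFound check, and the resource names are assumptions for illustration; the key point is that deletion tolerates "already gone" so the hook stays idempotent across restarts:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the Kubernetes NotFound API error; a real
// implementation would check the apiserver's NotFound status instead.
var errNotFound = errors.New("not found")

// cleanupOld removes the per-component resources left over from the
// split-binary layout, ignoring ones that are already gone so the
// startup hook is safe to run on every restart.
func cleanupOld(deleteFn func(name string) error, names ...string) error {
	for _, n := range names {
		if err := deleteFn(n); err != nil && !errors.Is(err, errNotFound) {
			return fmt.Errorf("deleting %s: %w", n, err)
		}
	}
	return nil
}
```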
Force-pushed from a74d2d3 to a89446f.
The following is the coverage report on the affected files.
/assign @houshengbo
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thanks!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jcrossley3, markusthoemmes. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
This is accounted for upstream in f8fdf32