Bug 1860035: Fix SubscriptionConfig NodeSelector field #1716
Conversation
@awgreene: This pull request references Bugzilla bug 1860035, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
Looks good, just some nits
/lgtm
```markdown
### NodeSelector

The `nodeSelector` field defines a [NodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) for the Pod created by OLM.
```
nit: this is a bit vague. Could we say "for the pods associated with the operator deployment" instead?
This also future-proofs it a bit where in the future OLM supports more than just deploying operators, something we discussed recently
> nit: this is a bit vague. Could we say "for the pods associated with the operator deployment" instead?

Technically this config is applied to all deployments created by OLM.

> This also future-proofs it a bit where in the future OLM supports more than just deploying operators, something we discussed recently

OLM also creates deployments for webhooks, which could be stored in the same deployment as the operator but isn't guaranteed, so I don't know how OLM could only create deployments for operators.
```go
// If any Container in PodSpec already defines a NodeSelector it will
// be overwritten.
func InjectNodeSelectorIntoDeployment(podSpec *corev1.PodSpec, nodeSelector map[string]string) error {
```
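Only the declaration appears in the diff above. As a rough, self-contained sketch of a body consistent with the doc comment (this is not the actual OLM implementation, and `PodSpec` here is a simplified stand-in for the real `corev1.PodSpec`):

```go
package main

import (
	"errors"
	"fmt"
)

// PodSpec is a simplified stand-in for corev1.PodSpec, carrying only the
// field this sketch needs.
type PodSpec struct {
	NodeSelector map[string]string
}

// InjectNodeSelectorIntoDeployment overwrites any NodeSelector already set
// on the pod spec, as the doc comment in the diff describes.
func InjectNodeSelectorIntoDeployment(podSpec *PodSpec, nodeSelector map[string]string) error {
	if podSpec == nil {
		return errors.New("no pod spec provided")
	}
	podSpec.NodeSelector = nodeSelector
	return nil
}

func main() {
	ps := &PodSpec{NodeSelector: map[string]string{"old": "value"}}
	if err := InjectNodeSelectorIntoDeployment(ps, map[string]string{"disktype": "ssd"}); err != nil {
		panic(err)
	}
	fmt.Println(ps.NodeSelector) // map[disktype:ssd]
}
```

Note that the sketch replaces the whole map rather than merging keys, which matches the "will be overwritten" wording under review.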
nit: this function is called InjectNodeSelectorIntoDeployment but injects the nodeSelector into a PodSpec. Maybe calling it InjectNodeSelectorIntoPodSpec makes it a little clearer?
This makes sense, but none of the other methods in this file distinguish between injecting into the deployment or the podSpec. I suspect this was either an oversight when the code was written, or an assumption that it's fine given that the podSpec is part of the deployment. Should we change the distinction in all places or follow convention?
This PR failed tests 1 time, with 3 individual failed tests and 4 skipped tests. A test is considered flaky if it failed on multiple commits. Total test count: 1.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: awgreene, ecordell. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest
This PR failed tests 1 time, with 6 individual failed tests and 4 skipped tests. A test is considered flaky if it failed on multiple commits. Total test count: 1.
/retest
/retest Please review the full test history for this PR and help us cut down flakes.
Force-pushed fff853b to ee7eee2 (Compare)
The e2e test was failing because the CSV never reached the succeeded state. The pod associated with the operator was never assigned to a node that had the
This PR failed 0 out of 1 times, with 0 individual failed tests and 131 skipped tests. A test is considered flaky if it failed on multiple commits. Total test count: 1.
/retest
Problem: OLM's Subscription CRD allows cluster admins to set the nodeSelector on Operators they deploy. Currently this API is exposed but performs no action.
Solution: Wire the NodeSelector specified in the SubscriptionConfig into the pods deployed when installing the operator.
Force-pushed ee7eee2 to 72ee706 (Compare)
/lgtm
@awgreene: All pull requests linked via external trackers have merged: operator-framework/operator-lifecycle-manager#1716. Bugzilla bug 1860035 has been moved to the MODIFIED state.
Problem: OLM's Subscription CRD allows cluster admins to set
nodeSelectors on Operators they deploy. Currently this API is exposed
but performs no action.
Solution: Wire the NodeSelector specified in the SubscriptionConfig into
the pods deployed when installing the operator.
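As a minimal sketch of that wiring, with simplified stand-ins for the OLM and Kubernetes types (all names here are illustrative, not the actual SubscriptionConfig or corev1 definitions; the real change lives in OLM's install logic):

```go
package main

import "fmt"

// SubscriptionConfig is a simplified stand-in for the config block on
// OLM's Subscription CRD.
type SubscriptionConfig struct {
	NodeSelector map[string]string
}

// DeploymentSpec is a simplified stand-in for a deployment OLM creates,
// with just the pod-template field this sketch needs.
type DeploymentSpec struct {
	Name        string
	PodTemplate struct {
		NodeSelector map[string]string
	}
}

// applySubscriptionConfig copies the configured node selector onto every
// deployment OLM is about to create. As discussed in the review thread,
// the config applies to all deployments (operator and webhook alike).
func applySubscriptionConfig(cfg SubscriptionConfig, deployments []*DeploymentSpec) {
	if cfg.NodeSelector == nil {
		return // nothing configured; leave the deployments untouched
	}
	for _, d := range deployments {
		d.PodTemplate.NodeSelector = cfg.NodeSelector
	}
}

func main() {
	cfg := SubscriptionConfig{NodeSelector: map[string]string{"kubernetes.io/os": "linux"}}
	ds := []*DeploymentSpec{{Name: "my-operator"}, {Name: "my-webhook"}}
	applySubscriptionConfig(cfg, ds)
	for _, d := range ds {
		fmt.Println(d.Name, d.PodTemplate.NodeSelector)
	}
}
```

The selector then lands in each pod spec, so the scheduler only places the operator's pods on nodes whose labels match, which is what the previously no-op API promised.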