Update install doc to match release v0.19.8 #163
Conversation
/assign @cwdsuzhou @denkensk
doc/install.md
Outdated
scheduleTimeoutSeconds: 10
```

<< UNSOLVED >>: It's expected to get `status` in the above yaml. Needs debug.
@cwdsuzhou @denkensk could you help investigate why the status is not displayed? Better to compare the RBAC/CRD with your production settings.
After appending the 2nd commit, a PodGroup with matching Pods can display its status, but it's still empty for a PodGroup without any matching Pods.
@@ -0,0 +1,83 @@
# First part
@cwdsuzhou @denkensk could you help compare the RBAC/CRD with your production settings, to ensure the correctness and that the privileges are not over-exposed? Thanks in advance.
/hold
manifests/install/all-in-one.yaml
Outdated
resources: ["pods"]
verbs: ["get", "list", "watch"]
- apiGroups: ["scheduling.sigs.k8s.io"]
resources: ["podgroups", "podgroups/status", "elasticquotas", "elasticquotas/status"]
We have not enabled the `status` subresource of PodGroup, so `podgroups/status` is not necessary.
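For context, whether a `podgroups/status` RBAC rule has any effect depends on whether the CRD declares the `status` subresource. A minimal sketch (field names follow the standard `apiextensions.k8s.io/v1` schema; this manifest is illustrative, not copied from the repo):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: podgroups.scheduling.sigs.k8s.io
spec:
  group: scheduling.sigs.k8s.io
  names:
    kind: PodGroup
    plural: podgroups
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
      subresources:
        # Without this block, /status is not served as a distinct endpoint,
        # and a "podgroups/status" RBAC rule grants nothing extra.
        status: {}
```

With the subresource enabled, `status` can only be written through the `/status` endpoint, which is when the separate RBAC rule becomes meaningful.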
Thanks. Could you help verify that when a PodGroup is created (without any matching Pods), its status is displayed properly?

Updated: after I updated the controller SA's RBAC setting (replacing Update with Patch), it shows properly. But it's not showing the number:
```yaml
...
spec:
  minMember: 3
  scheduleTimeoutSeconds: 10
status:
  phase: Pending
```
And if I create a deployment with 2 pods, the `phase` gets changed to `PreScheduling`, but if I delete the deployment, its phase stays in `PreScheduling` (I'd expect it to be updated back to `Pending`).
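The stickiness described above would disappear if the phase were recomputed from the live pod count rather than only advanced forward. A hypothetical sketch in Go (this is my illustration of the expected transitions, not the actual controller code; `podGroupPhase`, its parameters, and the phase strings are assumptions based on the behavior reported in this thread):

```go
package main

import "fmt"

// podGroupPhase derives a PodGroup phase from the number of currently
// matching pods. Recomputing from live counts means deleting all pods
// naturally returns the group to Pending, instead of it staying stuck
// in PreScheduling as observed above.
func podGroupPhase(matchingPods, minMember int) string {
	switch {
	case matchingPods == 0:
		return "Pending" // no pods yet, or all pods deleted
	case matchingPods < minMember:
		return "PreScheduling" // some pods exist but quorum not met
	default:
		return "Scheduling" // minMember reached
	}
}

func main() {
	fmt.Println(podGroupPhase(2, 3)) // deployment with 2 pods, minMember 3
	fmt.Println(podGroupPhase(0, 3)) // after the deployment is deleted
}
```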
> And if I create a deployment with 2 pods, the `phase` gets changed to `PreScheduling`, but if I delete the deployment, its phase stays in `PreScheduling` (I'd expect it to be updated back to `Pending`).
This may be a bug: the status would not go back to `Pending`.
> This may be a bug: the status would not go back to `Pending`.
I will create an issue tracking this.
BTW: could you help confirm this issue:

> after I updated the controller SA's RBAC setting (replacing Update with Patch), it shows properly. But it's not showing the number:
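For reference, the change being discussed would look roughly like this in the controller's ClusterRole (a sketch; the before/after verb lists are my assumption from the comments above, not the exact manifest in this PR):

```yaml
- apiGroups: ["scheduling.sigs.k8s.io"]
  resources: ["podgroups"]
  # was: verbs: ["get", "list", "watch", "update"]
  verbs: ["get", "list", "watch", "patch"]
```

If the controller issues Patch requests, granting only `update` leads to RBAC denials that can surface as a missing or stale `status`.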
In my test environment, as I showed in #163 (comment), we can see the number. The controller is deployed based on the manifests in this PR.
Created #166 to track the UX issues.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: cwdsuzhou, Huang-Wei. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Force-pushed from 79623ef to 9bf85d9.
Commits squashed and some UX issues are tracked in #166. Please take another look. /hold cancel
@cwdsuzhou would you mind re-lgtm'ing?
doc/install.md
Outdated
[install doc](https://github.com/kubernetes-sigs/scheduler-plugins/blob/release-1.19/doc/install.md) in
branch `release-1.19`.
```diff
- [install doc](https://github.com/kubernetes-sigs/scheduler-plugins/blob/release-1.19/doc/install.md) in
-   branch `release-1.19`.
+ [install doc](https://github.com/kubernetes-sigs/scheduler-plugins/blob/release-1.18/doc/install.md) in
+   branch `release-1.18`.
```
Thanks for the catch. I put this paragraph into a section "Install lower-version releases".
/kind documentation
Force-pushed from 9bf85d9 to f6ed233.
/lgtm Adding a hold so that @cwdsuzhou can review one more time before merge. Please feel free to remove the hold once you are ready for this to merge. Thanks!
Kindly pinging @cwdsuzhou to take another look.
```

1. Create a deploy labelled `pod-group.scheduling.sigs.k8s.io: pg1` to associate it with PodGroup `pg1` created in the previous step.
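Such a Deployment would look roughly like this (a sketch; the name, replica count, image, and command are placeholders — only the `pod-group.scheduling.sigs.k8s.io: pg1` label comes from the step above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pg1-workers
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pg1-workers
  template:
    metadata:
      labels:
        app: pg1-workers
        # this label ties the Pods to the PodGroup named pg1
        pod-group.scheduling.sigs.k8s.io: pg1
    spec:
      containers:
        - name: worker
          image: busybox   # placeholder image
          command: ["sleep", "infinity"]
```

The label lives on the Pod template, not the Deployment metadata, since it is the Pods that the scheduler matches against the PodGroup.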
The numbering always shows `1.`; seems not intentional.
When the list is long and frequently modified, it's not easy to always keep its numbers accurately ordered :) So I keep them all as `1.` and indent them well; markdown will do the rest.
Minor nits. Otherwise, lgtm.
/hold cancel
Part of #161.
Updated the installation doc to match release v0.19.8.
Fixes #150.