Next generation MT Scheduler and Descheduler with pluggable policies #768
Conversation
/test pull-knative-sandbox-eventing-kafka-unit-tests
Codecov Report
@@ Coverage Diff @@
## main #768 +/- ##
==========================================
+ Coverage 74.64% 74.75% +0.11%
==========================================
Files 143 154 +11
Lines 6633 7139 +506
==========================================
+ Hits 4951 5337 +386
- Misses 1435 1518 +83
- Partials 247 284 +37
Continue to review full report at Codecov.
zoneMap := make(map[string]struct{})
for i := 0; i < len(nodes); i++ {
	node := nodes[i]
	if node.Spec.Unschedulable {
I think this line requires some explanation. In theory the vpod scheduler shouldn't have to deal with nodes directly. A pod on an unschedulable node might still be viable, right?
It's a thin line, I feel. A node is marked as Unschedulable in its Spec after it has been cordoned (and optionally drained). The K8s scheduler won't place new pods onto that node, so the idea was that we should not place new vreplicas onto pods running on that node either, because the next step for that node might be rebooting, shutting down, upgrading, etc. So this is a preventative step to avoid creating more rebalancing work later. That was my thinking...
}
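The cordon check discussed above can be sketched as follows. This is an illustrative simplification, using hypothetical stand-in types rather than the real corev1.Node/corev1.Pod objects and listers the scheduler actually uses:

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes API objects (assumption: the real
// code reads node.Spec.Unschedulable via a corev1 node lister).
type Node struct {
	Name          string
	Unschedulable bool // set when the node is cordoned, e.g. before an upgrade
}

type Pod struct {
	Name     string
	NodeName string
}

// schedulablePods drops pods running on cordoned nodes, so new vreplicas are
// never placed where the K8s scheduler would no longer place pods.
func schedulablePods(pods []Pod, nodes map[string]Node) []Pod {
	var out []Pod
	for _, p := range pods {
		if n, ok := nodes[p.NodeName]; ok && !n.Unschedulable {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	nodes := map[string]Node{
		"n1": {Name: "n1", Unschedulable: false},
		"n2": {Name: "n2", Unschedulable: true}, // cordoned
	}
	pods := []Pod{{Name: "p1", NodeName: "n1"}, {Name: "p2", NodeName: "n2"}}
	fmt.Println(schedulablePods(pods, nodes)) // only p1 remains eligible
}
```

The trade-off the thread describes is visible here: a pod on a cordoned node may still be serving traffic, but placing new vreplicas on it would likely have to be undone soon.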
nodeName := pod.Spec.NodeName
node, err := s.nodeLister.Get(nodeName) // Node could be marked as Unschedulable - cannot schedule vreps on a pod running on this node
Same here. Maybe we just don't care about pod eviction here; it has to be dealt with somewhere else anyway.
Each plugin must be given some info to prevent it from selecting a pod that is in the process of being evicted. Not sure yet how to do that without the state collector telling it so... will think about this a bit more.
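One possible shape for the idea in this comment, where the state collector records in-flight evictions and plugins consult that state, might look like this. All names here (State, EvictionFilter, Filter) are hypothetical, not the PR's actual API:

```go
package main

import "fmt"

// State is a hypothetical snapshot produced by the state collector; it is
// assumed to track which pods are currently being evicted.
type State struct {
	Evicting map[string]bool // pod name -> currently under eviction
}

// EvictionFilter is an illustrative filter plugin that rejects pods the
// state collector has marked as being evicted.
type EvictionFilter struct{}

// Filter returns true if the pod is eligible for new vreplica placements.
func (EvictionFilter) Filter(s *State, podName string) bool {
	return !s.Evicting[podName]
}

func main() {
	s := &State{Evicting: map[string]bool{"kafka-source-1": true}}
	f := EvictionFilter{}
	fmt.Println(f.Filter(s, "kafka-source-0")) // true: eligible
	fmt.Println(f.Filter(s, "kafka-source-1")) // false: being evicted
}
```

The design question raised in the thread is exactly who populates State.Evicting; this sketch just assumes the collector does.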
Signed-off-by: aavarghese <avarghese@us.ibm.com>
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: aavarghese, lionelvillard. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Tracking the remaining optimizations to do:
/cc @lionelvillard
Fixes #593
🎁 Implementing a next-generation MT scheduler and descheduler for Knative Eventing sources, with plugins for filtering pods and scoring them to pick the best placement pod.
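A filter-then-score pipeline of the kind the description mentions can be sketched as below. The interfaces and example plugins are assumptions for illustration, not the PR's actual plugin API:

```go
package main

import "fmt"

// Hypothetical plugin interfaces: filters remove ineligible pods, scorers
// rank the survivors.
type FilterPlugin interface {
	Filter(pod string) bool // false removes the pod from consideration
}

type ScorePlugin interface {
	Score(pod string) int // higher means a better placement
}

// pickBest runs every filter over every pod, sums the scorers for the
// survivors, and returns the highest-scoring pod.
func pickBest(pods []string, filters []FilterPlugin, scorers []ScorePlugin) (string, bool) {
	best, bestScore, found := "", 0, false
next:
	for _, p := range pods {
		for _, f := range filters {
			if !f.Filter(p) {
				continue next // rejected by a filter plugin
			}
		}
		score := 0
		for _, s := range scorers {
			score += s.Score(p)
		}
		if !found || score > bestScore {
			best, bestScore, found = p, score, true
		}
	}
	return best, found
}

// notPod is a toy filter that excludes one named pod.
type notPod struct{ name string }

func (n notPod) Filter(p string) bool { return p != n.name }

// preferShort is a toy scorer that prefers shorter pod names.
type preferShort struct{}

func (preferShort) Score(p string) int { return -len(p) }

func main() {
	pods := []string{"pod-aa", "pod-b", "pod-ccc"}
	best, ok := pickBest(pods, []FilterPlugin{notPod{"pod-b"}}, []ScorePlugin{preferShort{}})
	fmt.Println(best, ok) // pod-aa true
}
```

Keeping filtering and scoring behind small interfaces like these is what makes the policies pluggable: new placement behavior means adding a plugin, not changing the core loop.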
Proposed Changes
Release Note