
Allow cluster-autoscaler to run on spot if nothing else is available #14593

Merged — 1 commit merged into kubernetes:master on Dec 6, 2022

Conversation

johngmyers (Member)

An alternative to #14591

Fixes #14411

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Nov 18, 2022
@k8s-ci-robot k8s-ci-robot added area/addons size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Nov 18, 2022
@olemarkus (Member)

This is not a solution for spot-only clusters, so I don't think it safely fixes #14411

@johngmyers johngmyers changed the title WIP Allow cluster-autoscaler to run on spot if nothing else is available Allow cluster-autoscaler to run on spot if nothing else is available Dec 6, 2022
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 6, 2022
@johngmyers (Member, Author)

@olemarkus could you expand on why this isn't a solution for spot-only clusters?

@olemarkus left a comment (Member)

For spot-only clusters, the only really safe place to put CAS would be on the control plane. However, using NTH (the AWS Node Termination Handler) in addition would probably mitigate this to the extent that it is a very unlikely scenario.

If you have many IGs with a very narrow mix of instance types, you can still run into this, though, with CAS trying to scale up an IG that has no spot capacity and then getting evicted before it has the chance to jump to the next one.

But yeah, this all may be theoretical, since we also run two CAS instances and the likelihood of both getting reaped is in itself fairly low.
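[Editor's note] The behavior described in the PR title can be sketched with a scheduling fragment. This is an illustrative manifest snippet, not the exact diff from this PR, and the spot label key used here is an assumption: with a *preferred* (soft) node affinity, the scheduler steers CAS away from spot nodes when other capacity exists, but still allows spot placement when nothing else is available.

```yaml
# Sketch only: prefer non-spot nodes for cluster-autoscaler, but do not
# require them, so the pod can still land on spot as a last resort.
# The label key below is a hypothetical spot-node marker, not necessarily
# the one used by this PR.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: node-role.kubernetes.io/spot-worker
              operator: DoesNotExist
```

A `requiredDuringSchedulingIgnoredDuringExecution` rule here would instead make CAS unschedulable on spot-only clusters, which is the failure mode the PR avoids.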

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Dec 6, 2022
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: olemarkus

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 6, 2022
@k8s-ci-robot k8s-ci-robot merged commit 7ce93c0 into kubernetes:master Dec 6, 2022
@k8s-ci-robot k8s-ci-robot added this to the v1.26 milestone Dec 6, 2022
@johngmyers johngmyers deleted the cas-allow-spot branch December 6, 2022 17:54
@johngmyers (Member, Author)

Cluster Autoscaler is system-cluster-critical, so there's a good chance of it rescheduling if it gets evicted.
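[Editor's note] `system-cluster-critical` is one of Kubernetes' built-in high-priority classes; pods using it can preempt lower-priority pods, so an evicted CAS pod has a good chance of being rescheduled even on a busy cluster. A minimal sketch of how a deployment opts in (illustrative fragment, not the PR's manifest):

```yaml
# Sketch only: a pod spec fragment using the built-in
# system-cluster-critical priority class, which lets the scheduler
# preempt lower-priority pods to make room for this pod.
spec:
  template:
    spec:
      priorityClassName: system-cluster-critical
```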

Labels: approved, area/addons, cncf-cla: yes, lgtm, size/M
Successfully merging this pull request may close these issues.

Node affinity for cluster autoscaler
3 participants