
Fix 2 bugs in the OCI integration #7854

Open
wants to merge 4 commits into master

Conversation

eric-higgins-ai

What type of PR is this?

/kind bug

What this PR does / why we need it:

This PR fixes two bugs in the Oracle Cloud integration (a rough sketch of both fixes follows the list):

  • When fetching information about node shapes to determine whether a node pool with 0 nodes can be scaled up, the code only fetches the first 100 results. It specifies Limit: 500, but the backend caps the limit at 100, so nodes whose shapes fall outside the first page of results can't be scaled up from 0.
  • When a scale-up attempt fails because the region lacks capacity, OCI still shows a node in the pool but never assigns it an ID, so any attempt to scale that node down fails. To fix this, we simply don't report nodes without an ID to the autoscaler.
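
To make the two fixes concrete, here is a minimal sketch. The shape/node/client types are hypothetical stand-ins loosely modeled on the OCI SDK's list-call conventions (a per-page Limit and an opc-next-page token), not the exact types the PR touches:

```go
package ocisketch

import "context"

// Hypothetical, simplified stand-ins for the OCI list-shapes call and the
// node-pool node model; the real provider uses the vendored OCI SDK types.
type shape struct{ Name string }

type listShapesResponse struct {
	Items       []shape
	OpcNextPage *string // nil when there are no more pages
}

type shapeClient interface {
	ListShapes(ctx context.Context, limit int, page *string) (listShapesResponse, error)
}

// listAllShapes pages through every result instead of issuing a single
// request with Limit: 500 (the backend caps a page at 100 items), so shapes
// beyond the first page are no longer silently dropped.
func listAllShapes(ctx context.Context, c shapeClient) ([]shape, error) {
	var all []shape
	var page *string
	for {
		resp, err := c.ListShapes(ctx, 100, page) // 100 is the backend maximum
		if err != nil {
			return nil, err
		}
		all = append(all, resp.Items...)
		if resp.OpcNextPage == nil {
			return all, nil
		}
		page = resp.OpcNextPage
	}
}

type node struct{ ID *string }

// instanceIDs drops nodes that OCI reports without an ID (for example after
// an out-of-host-capacity failure) so the autoscaler is never told about
// instances it cannot act on.
func instanceIDs(nodes []node) []string {
	var ids []string
	for _, n := range nodes {
		if n.ID == nil || *n.ID == "" {
			continue // never provisioned, nothing to scale down
		}
		ids = append(ids, *n.ID)
	}
	return ids
}
```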

Special notes for your reviewer:

Does this PR introduce a user-facing change?

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/bug Categorizes issue or PR as related to a bug. labels Feb 22, 2025

linux-foundation-easycla bot commented Feb 22, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: eric-higgins-ai
Once this PR has been reviewed and has the lgtm label, please assign jlamillan for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. area/provider/oci Issues or PRs related to oci provider needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Feb 22, 2025
@k8s-ci-robot
Contributor

Hi @eric-higgins-ai. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Feb 22, 2025
@eric-higgins-ai eric-higgins-ai marked this pull request as ready for review February 24, 2025 19:34
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 24, 2025
@k8s-ci-robot k8s-ci-robot requested a review from x13n February 24, 2025 19:34
@jlamillan
Contributor

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Feb 27, 2025
@jlamillan
Contributor

jlamillan commented Feb 27, 2025

The pagination fix for listing node shapes makes sense to me and is straightforward.

We took a different approach to the capacity failure in the Instance Pool implementation of the OCI provider. We monitor the work request queue for errors related to capacity/quota and fail any ongoing scale-up operation when they are detected. Additionally, we create "placeholder instances" for yet-to-be-fulfilled nodes, which allows the Cluster Autoscaler to short-circuit the --max-node-provision-time timeout by giving it (unfulfilled) instances to delete. This approach frees the Cluster Autoscaler to (1) detect the failure fast and (2) try to scale a different node group to fulfill the missing instances (provided it meets the node requirements).
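
For context, the placeholder idea looks roughly like this in terms of the Cluster Autoscaler's cloudprovider types. This is only an illustration of the mechanism, not the actual Instance Pool code: reporting an unfulfilled instance with an out-of-resources error is what lets the core loop abandon the scale-up right away.

```go
package ocisketch

import "k8s.io/autoscaler/cluster-autoscaler/cloudprovider"

// placeholderInstance represents a requested-but-unfulfilled node. Surfacing
// it with an out-of-resources error lets the Cluster Autoscaler fail the
// scale-up immediately instead of waiting for --max-node-provision-time.
func placeholderInstance(id string, outOfCapacity bool) cloudprovider.Instance {
	inst := cloudprovider.Instance{
		Id:     id,
		Status: &cloudprovider.InstanceStatus{State: cloudprovider.InstanceCreating},
	}
	if outOfCapacity {
		inst.Status.ErrorInfo = &cloudprovider.InstanceErrorInfo{
			ErrorClass:   cloudprovider.OutOfResourcesErrorClass,
			ErrorCode:    "OutOfHostCapacity",
			ErrorMessage: "work request reported insufficient capacity",
		}
	}
	return inst
}
```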

@trungng92 what do you think?

@eric-higgins-ai
Author

@jlamillan It seems like the scale-up operation is async in the node pool implementation, unlike the instance pool, so I think it's not quite as straightforward to fast-fail the scale-up.

My approach does have a limitation, though: we'll keep trying to provision the node forever if the node pool is scaling up from 0 nodes. Specifically, that case is supposed to be handled by DecreaseTargetSize, but it never gets called because the node group has no readiness information (since it has no nodes), so we never register that the node group has an incorrect size. To me this feels like a bug in updateReadinessStats, because I think DecreaseTargetSize would work as long as the node pool has at least one provisioned node. My preference would be to fix that instead, if that makes sense to you as well.

@jlamillan
Contributor

@eric-higgins-ai thanks for looking into that and thanks for the PR.

I'm comfortable approving the pagination fix. However, we need feedback from someone on the OKE team about the failed scale-up change, preferably @trungng92.

If it'd be helpful to you to have the first fix merged in quickly, you can separate the fixes into two different PRs.

@trungng92
Contributor

trungng92 commented Mar 3, 2025

I agree that the pagination fix looks good as is.

As for this change:

This means that any attempts to scale down the node fail. To fix this, we just don't tell the autoscaler about nodes that have no ID.

Two options come to mind if an upcoming instance doesn't have an instance OCID:

  1. Don't include it in the list of nodes to perform actions on (as the current pull request does).
  2. Or, if the autoscaler tries to scale down a node/instance with no instance OCID, ignore it and log a warning during the delete node call.

I am slightly in favor of the second option. I'd prefer that the cluster autoscaler keep the information about the upcoming instance, and then we can choose whether or not to act on that information, perhaps somewhere in the DeleteNode functionality. A rough sketch of that is below.
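
Here is roughly what option 2 could look like. The nodePool receiver and the instanceOCID/terminateInstance helpers are hypothetical placeholders, not the actual node-pool implementation:

```go
import (
	apiv1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

// DeleteNodes (option 2 sketch): keep ID-less upcoming nodes visible to the
// autoscaler, but skip them with a warning when a delete is requested.
func (np *nodePool) DeleteNodes(nodes []*apiv1.Node) error {
	for _, n := range nodes {
		ocid := np.instanceOCID(n) // hypothetical helper: node -> instance OCID
		if ocid == "" {
			klog.Warningf("node %s has no instance OCID (likely never provisioned), skipping delete", n.Name)
			continue
		}
		if err := np.terminateInstance(ocid); err != nil { // hypothetical helper
			return err
		}
	}
	return nil
}
```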

@gvnc
Contributor

gvnc commented Mar 3, 2025

@trungng92 we avoided making delete API calls when a node doesn't have an instance ID within this PR.
We also added extra logs so users are aware of the issue, i.e. why the delete operation fails, and we log the node name since the node doesn't hold an OCID.

klog.Errorf("Node %s doesn't have an instance id so it can't be deleted.", nodeName)

klog.Errorf("This could be due to a Compute Instance issue in OCI such as Out Of Host Capacity error. Check the instance status on OCI Console.")

@eric-higgins-ai
Author

If it'd be helpful to you to have the first fix merged in quickly, you can separate the fixes into two different PRs.

@jlamillan it doesn't matter too much to us; we're already using a forked version of cluster autoscaler with these fixes and just want to make this change so we can eventually go back to the OCI-managed cluster autoscaler.

@gvnc I don't think that PR actually fixes the underlying issue. It makes DeleteNodes error in a nicer way, but it still errors, which means the cluster autoscaler exits its reconciliation loop early and won't autoscale any node pools that would be checked after the erroring one. I think if we changed this line to just return nil, that would fix the issue too - I'm down to update this PR to do that if y'all prefer.
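
Roughly what I mean (illustrative only, not the exact code at the linked line): keep the log messages but return nil so the rest of the reconciliation loop still runs.

```go
if instanceID == "" {
	klog.Errorf("Node %s doesn't have an instance id so it can't be deleted.", nodeName)
	klog.Errorf("This could be due to a Compute Instance issue in OCI such as Out Of Host Capacity error. Check the instance status on OCI Console.")
	return nil // skip this node instead of failing the whole DeleteNodes call
}
```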
