Add known failure mode for Server Side Apply #4308

Closed
wants to merge 2 commits into from
14 changes: 14 additions & 0 deletions keps/sig-api-machinery/555-server-side-apply/README.md
@@ -506,6 +506,7 @@ _This section must be completed when targeting beta graduation to a release._

* **What are other known failure modes?**
For each of them, fill in the following information by copying the below template:
<!--
- [Failure mode brief description]
- Detection: How can it be detected via metrics? Stated another way:
how can an operator troubleshoot without logging into a master or worker node? Apply requests (`PATCH` with `application/apply-patch+yaml` mime type) have the same level of SLIs as other types of requests.
@@ -515,6 +516,19 @@ _This section must be completed when targeting beta graduation to a release._
levels that could help debug the issue? The feature uses very little logging, and errors should be returned directly to the user.
Not required until feature graduated to beta.
- Testing: Are there any tests for failure mode? Failure modes are tested exhaustively both as unit-tests and as integration tests.
-->
- SSA status updates fail for pods with duplicated environment variable names or container ports.
  - Known bug since at least 1.26, fixed in 1.29 by [Update sigs.k8s.io/structured-merge-diff to v4.4.0](https://github.com/kubernetes/kubernetes/pull/121575)
  - Bugs:
    - [Pod container ports and env-vars listMapKeys != validation](https://github.com/kubernetes/kubernetes/issues/113482)
    - [Pod Garbage collector fails to clean up PODs from nodes that are not running anymore](https://github.com/kubernetes/kubernetes/issues/118261) (fixed in [#121103](https://github.com/kubernetes/kubernetes/pull/121103))
  - Detection: The SSA request fails with HTTP 500. The response message is similar to the following (a reproduction sketch follows this list):
    `'failed to create manager for existing fields: failed to convert new object (app-b/app-b-5894548cb-7tssd; /v1, Kind=Pod) to smd typed: .spec.containers[name="app-b"].ports: duplicate entries for key [containerPort=8082,protocol="TCP"]'`.
  - Mitigations: Make sure pods with duplicated keys for environment variables or
    container ports are not created. Also, update existing pods to clean up
    the problematic fields.
  - Testing: The [PodGC integration test](https://github.com/kubernetes/kubernetes/blob/7b9d244efd19f0d4cce4f46d1f34a6c7cff97b18/test/integration/podgc/podgc_test.go#L313)
    reproduced the issue before SSA was withdrawn from PodGC in [PR #121103](https://github.com/kubernetes/kubernetes/pull/121103).
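
The failure is easiest to see from the client side. Below is a minimal, illustrative Go sketch (not part of the KEP) of the kind of request that hits this failure mode: a server-side apply of a pod condition against the status subresource, roughly what PodGC and the pod disruption condition logic do. The namespace `app-b` and pod name `app-b-5894548cb-7tssd` mirror the error message above and assume the pod's spec already contains a duplicated container port; the field manager name `podgc-demo` and the kubeconfig location are likewise illustrative.

```go
// Illustrative sketch only: trigger the SSA status failure described above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	corev1apply "k8s.io/client-go/applyconfigurations/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed: a kubeconfig at the default location and an existing pod
	// app-b/app-b-5894548cb-7tssd whose spec carries a duplicated containerPort.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Server-side apply of a pod condition to the status subresource,
	// similar to PodGC adding the DisruptionTarget condition.
	podApply := corev1apply.Pod("app-b-5894548cb-7tssd", "app-b").
		WithStatus(corev1apply.PodStatus().
			WithConditions(corev1apply.PodCondition().
				WithType(corev1.DisruptionTarget).
				WithStatus(corev1.ConditionTrue).
				WithReason("DeletionByPodGC").
				WithMessage("PodGC: node no longer exists").
				WithLastTransitionTime(metav1.Now())))

	_, err = client.CoreV1().
		Pods("app-b").
		ApplyStatus(context.TODO(), podApply, metav1.ApplyOptions{
			FieldManager: "podgc-demo", // illustrative field manager name
			Force:        true,
		})
	if err != nil {
		// On affected versions this prints the HTTP 500
		// "duplicate entries for key" error quoted in the Detection bullet.
		fmt.Println("apply-status failed:", err)
		return
	}
	fmt.Println("apply-status succeeded")
}
```

On clusters running affected versions (at least 1.26 through 1.28), the `ApplyStatus` call above is expected to fail with the error quoted in the Detection bullet; on 1.29+, which picks up structured-merge-diff v4.4.0, the same status apply is expected to succeed.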
Member:

the fix for this is in flight in kubernetes-sigs/structured-merge-diff#253, I don't think updating the design doc for this is helpful... the github issues mentioned seem sufficient, especially once the SSA impact is fixed

Contributor Author (@mimowo, Oct 20, 2023):

I wasn't aware of the ongoing fix.

Still, it might be worth informing users who might be running older versions, and the "known failure modes" section seems like the right place. However, if there is consensus this is not the right place, then I'm happy to close the PR.

Member:

I think the "known failure modes" are about failure modes inherent in the design, but will defer to @apelisse if he thinks it's useful to include this here

Member:

I think this is more of a decision for a PRR reviewer. What's your stance, @wojtek-t?

Member:

I think it might be useful to include it here, but since the fix is very close to landing, I would wait to see how we manage to backport it before updating, since we will have to update again soon anyway.

Member:

I would personally merge it, but it requires SIG lead approval.

@soltysh - can you please approve?

Contributor:

> I would personally merge it, but requires SIG lead approval.
>
> @soltysh - can you please approve?

This one falls under api-machinery so @deads2k, @jpbetz or @fedebongio

Contributor:

Chasing the structured-merge-diff failure, it appears to have merged into kube in kubernetes/kubernetes#121575. Is this "SSA doesn't work under these conditions" still valid?

Contributor Author (@mimowo, Feb 7, 2024):

IIUC the generic issue remains open: kubernetes/kubernetes#113482.
However, the kubernetes/kubernetes#121575 PR fixes the use case we had for pod disruption conditions, where the status subresource is patched. So using SSA for status is fixed, but not SSA in general.

Contributor:

I have queued an item in the apimachinery agenda for "sometime after kep freeze": https://docs.google.com/document/d/1x9RNaaysyO0gXHIr1y50QFbiL1x8OWnk2v3XnrdkT5Y/edit


* **What steps should be taken if SLOs are not being met to determine the problem?** n/a
