How to detect 410 Gone in watch response? #452
You're right that watch does not return actual HTTP error codes, but rather passes a value with an error message into the block:
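For illustration, here is a minimal sketch of checking for this inside the block, using kubeclient's block-style watch. The cluster URL, auth, and the choice of pods are placeholders, and the `type`/`object.code` fields are assumed to match the `Status` payload quoted further down in this thread:

```ruby
require 'kubeclient'

client = Kubeclient::Client.new('https://localhost:8443/api', 'v1')

# The watch yields one notice per event; errors arrive as a notice whose
# `object` is a Kubernetes `Status`, not as an HTTP-level exception.
client.watch_pods(resource_version: '123') do |notice|
  if notice.type == 'ERROR' && notice.object.kind == 'Status' && notice.object.code == 410
    # 410 Gone: the requested resourceVersion is too old. Stop this watch,
    # re-list to get a fresh resourceVersion, then start a new watch.
    break
  end
  puts "#{notice.type} #{notice.object.metadata.name}"
end
```

In other words, the 410 never surfaces as an exception; it has to be read out of the notice itself.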
This is unfortunately deliberate in k8s (kubernetes/kubernetes#25151, kubernetes/kubernetes#35068 (comment)). Oh well. LOL, in #436 I told myself it's OK not to document how to detect 410 as I intend to fix it "ASAP" 🤣 See also this brain dump / discussion on improving this in kubeclient: #275 (comment).
See also previous discussion in your project: fabric8io/fluent-plugin-kubernetes_metadata_filter#214
Thanks for the info. Sounds like that's the way to check. For the record, I'm not a contributor on that project. I'm just some poor shmuck out here running the elasticsearch-fluentd helm chart in my cluster, seeing the fluentd pod periodically blowing up because of the lack of error handling in that plugin. :-)
@cben I got ambitious and used your advice to PR a contribution to the fluentd plugin: fabric8io/fluent-plugin-kubernetes_metadata_filter#243. Thanks again.
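For anyone landing here with the same problem, a hypothetical recovery loop might look like the sketch below. It only illustrates the re-list-and-rewatch idea discussed above, not necessarily what that PR does; `watch_pods_forever` is an invented helper name:

```ruby
require 'kubeclient'

# Sketch: keep watching pods; whenever the server reports 410 Gone, fall back
# to a full re-list and restart the watch from the resourceVersion carried by
# the list itself.
def watch_pods_forever(client)
  loop do
    pods = client.get_pods
    pods.each { |pod| yield 'RESYNC', pod }    # let the caller resync state
    version = pods.resourceVersion             # list-level resourceVersion

    client.watch_pods(resource_version: version) do |notice|
      # A 410 means our version fell out of the server's window: break out of
      # the watch so the outer loop re-lists and starts over.
      break if notice.type == 'ERROR' && notice.object.code == 410
      yield notice.type, notice.object
    end
  end
end

# Usage (client construction and auth omitted):
# watch_pods_forever(client) { |type, obj| puts "#{type} #{obj.metadata.name}" }
```

Restarting from the list's own `resourceVersion`, rather than the greatest per-object one, keeps the new watch inside the server's retention window.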
BTW,
References:

* ManageIQ/kubeclient#452 (comment)
* fabric8io/kubernetes-client#1800 (comment)
* kubernetes/kubernetes#25151 (comment)
* kubernetes/kubernetes#35068 (comment)
* https://github.com/kubernetes/kubernetes/blob/dde6e8e7465468c32642659cb708a5cc922add64/test/e2e/apimachinery/protocol.go#L68-L75
* https://kubernetes.io/docs/reference/using-api/api-concepts/#410-gone-responses
* https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes
* https://www.baeldung.com/java-kubernetes-watch#1-resource-versions
Rationale: avoid `ERROR` notifications during rolling upgrades.

Using the greatest `resourceVersion` of all the `ConfigMap`s returned within the `ConfigMapList` works as expected for *fresh* deployments. But when performing a *rolling upgrade* (and depending on the upgrade strategy), the watcher frequently stops after receiving an `ERROR` notification:

> [ERROR] [KubernetesConfigMapWatcher] [] Kubernetes API returned an error for a ConfigMap watch event: ConfigMapWatchEvent{type=ERROR, object=ConfigMap{metadata=Metadata{name='null', namespace='null', uid='null', labels={}, resourceVersion=null}, data={}}}

What's actually streamed in that case is a `Status` object such as:

```json
{
  "type": "ERROR",
  "object": {
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Expired",
    "message": "too old resource version: 123 (456)",
    "reason": "Gone",
    "code": 410
  }
}
```

A few references:

* ManageIQ/kubeclient#452
* https://www.baeldung.com/java-kubernetes-watch#1-resource-versions

It's possible to recover by adding some logic to reinstall the watcher starting at the newly advertised `resourceVersion`, but this can be avoided altogether by starting the initial watch at the `resourceVersion` of the `ConfigMapList` itself: this one won't expire.

The proposed implementation consists in storing the last received `resourceVersion` as a side effect of `getPropertySourcesFromConfigMaps` (inspired by how `PropertySource`s get cached through `configMapAsPropertySource`) and later using it when installing the watcher.
A revised version of the same change instead stores the last received `resourceVersion` as an additional `PropertySource` (through a dedicated name and version key) and likewise uses it when installing the watcher.

Co-authored-by: Regis Desgroppes <rdesgroppes@users.noreply.github.com>
Co-authored-by: Pavol Gressa <pavol.gressa@oracle.com>
Co-authored-by: Georg Rollinger <georg.rollinger@posteo.net>
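The same choice of starting point, sketched with kubeclient for consistency with the rest of this thread: the rationale above concerns a Java/Micronaut watcher, so this is an illustration only, with the namespace and URL as placeholders and the config-map accessors assumed to be the usual dynamically generated kubeclient methods.

```ruby
require 'kubeclient'

client = Kubeclient::Client.new('https://localhost:8443/api', 'v1')

config_maps = client.get_config_maps(namespace: 'default')

# Fragile: the greatest per-item resourceVersion may be far older than the
# server's watch window (and resourceVersion is officially opaque, so the
# numeric comparison itself is shaky); this is what leads to the 410 above.
per_item_version = config_maps.map { |cm| cm.metadata.resourceVersion.to_i }.max.to_s

# Robust: the resourceVersion of the ConfigMapList itself reflects the state
# at list time, so a watch started from it does not begin already expired.
list_version = config_maps.resourceVersion

client.watch_config_maps(namespace: 'default', resource_version: list_version) do |notice|
  # handle notices, including ERROR/410 as shown earlier in the thread
end
```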
https://github.com/abonas/kubeclient#starting-watch-version
In the documentation for the watch API, there's a great callout that clients need to handle 410 Gone errors. But from looking at the documentation, it's not clear how to detect this case. Can a little clarification please be added to the doc to explain how to properly introspect the notice to find the status code?
e.g. Should we expect to see `notice['object']['status'] == 410` or something?

I ask this because there's currently a nasty problem in the fluentd "kubernetes metadata" plugin, which is a client of this library, where this error is not being handled properly, and I'm trying to help resolve the issue. (See: fabric8io/fluent-plugin-kubernetes_metadata_filter#226)