Actual Behaviour
A Kopf-based operator reacts to object creation events in some cases. It is described in more detail in kubernetes-client/python#819.
Briefly: it is caused by how the Kubernetes client library is implemented: it remembers the last seen resource version among all objects as they are listed on the initial call. Kubernetes lists them in arbitrary order, so an old one can be the last in the list. The client library then uses that old resource version to re-establish the watch connection, which replays all the events since the moment in time when that resource version was the latest. This includes the creation, modification, and even deletion events for objects that no longer exist.
In practice, it means that the operator calls the handlers, which can potentially create child objects and cause other side effects. In our case, it happened every day when some cluster events were executed; but it could happen any time the existing watch connection is re-established.
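The failure mode can be modeled without a cluster. The sketch below is a toy simulation, not the real kubernetes client API: a LIST returns live objects in arbitrary order with per-object resource versions, the client remembers the version of the *last* listed object, and the re-established watch replays every event newer than that (stale) version:

```python
# Toy model of the watch-replay bug; all names and data are illustrative.

# The cluster's full event history, ordered by resourceVersion.
HISTORY = [
    (98,  "ADDED",    "expr2"),
    (100, "ADDED",    "expr3"),
    (101, "ADDED",    "expr1"),
    (102, "MODIFIED", "expr1"),
    (103, "DELETED",  "expr1"),   # expr1 no longer exists after this
    (104, "MODIFIED", "expr2"),
]

def list_objects():
    """Simulate a LIST call: live objects in arbitrary server order.

    Per-object resourceVersions reflect each object's last change,
    so an untouched object carries an arbitrarily old version."""
    return [("expr2", 104), ("expr3", 100)]  # oldest happens to be last

def watch_since(resource_version):
    """Simulate a WATCH: the server replays every event newer than
    the given resourceVersion."""
    return [e for e in HISTORY if e[0] > resource_version]

# The client remembers the resourceVersion of the LAST listed object...
last_seen = list_objects()[-1][1]   # 100 -- stale!

# ...so the re-established watch replays old events, including the
# creation, modification, and deletion of an object that is gone.
replayed = watch_since(last_seen)
print(replayed)
```

The replayed list contains the full `ADDED`/`MODIFIED`/`DELETED` history of `expr1`, which is exactly what the operator's handlers then react to.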
Expected Behaviour
The operator framework should follow the "eventual consistency" principle, which means that only the last state (the latest resource version, the latest event) should be handled.
Since the events are streaming, a "batch of events" can be defined as a time window of e.g. 0.1s: short enough not to delay the reaction in normal cases, but long enough to cover all events arriving in a row.
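One possible shape for such batching, as a minimal sketch (a hypothetical helper, not actual Kopf API): group arriving events into time windows, and within each window keep only the latest event per object, so a replayed burst collapses to a single final state:

```python
WINDOW = 0.1  # seconds; the batching time window suggested above

def batch_latest(events, window=WINDOW):
    """events: (timestamp, object_name, event) tuples in arrival order.
    Returns the events worth handling: the last event per object
    within each time window."""
    handled = []        # events from finished windows
    current = {}        # object_name -> latest event in the open window
    window_start = None
    for ts, name, event in events:
        if window_start is None or ts - window_start > window:
            handled.extend(current.values())   # close the old window
            current = {}
            window_start = ts
        current[name] = (ts, name, event)      # later overwrites earlier
    handled.extend(current.values())           # flush the last window
    return handled

events = [
    (0.00, "expr1", "ADDED"),
    (0.02, "expr1", "MODIFIED"),
    (0.05, "expr1", "DELETED"),    # replayed burst: only this survives
    (0.06, "expr2", "MODIFIED"),
    (1.00, "expr2", "MODIFIED"),   # a later, separate window
]
print(batch_latest(events))
```

The replayed burst for `expr1` collapses to its final `DELETED` event, while the later `expr2` change is handled on its own.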
Steps to Reproduce the Problem
Create some number of objects (10-20).
Example for my custom resource kind:
Please note the random order of resource_versions. Depending on your luck and the current state of the cluster, the last line can contain either a new-enough or the oldest resource version.
Let's use the latest resource_version 223394843 with a new watch object:

Well, okay, let's try the recommended resource_version, which is at least known to the API:
All of this is dumped immediately; nothing happens in the cluster during these operations. All these changes are old, i.e. not expected, as they were processed before doing list...().

Please note that even the deleted, non-existing resource is yielded ("expr1").
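For comparison, the version-picking step itself can be illustrated with a toy LIST response (invented data below). A real Kubernetes LIST reply carries a collection-level resourceVersion in its metadata, which is the correct point to start a watch from; the per-item versions can be arbitrarily old:

```python
# Toy LIST response; the structure mirrors a Kubernetes list object,
# but the names and numbers are made up for illustration.
list_response = {
    "metadata": {"resourceVersion": "223394843"},   # collection version
    "items": [
        {"metadata": {"name": "expr2", "resourceVersion": "223394843"}},
        {"metadata": {"name": "expr3", "resourceVersion": "223112584"}},
    ],
}

# What the buggy client does: remember the last listed item's version.
buggy = list_response["items"][-1]["metadata"]["resourceVersion"]

# What it should do: use the collection-level version from the metadata.
correct = list_response["metadata"]["resourceVersion"]

print(buggy, correct)
```

With the server ordering shown here, the last item carries the oldest version, so a watch started from `buggy` replays everything since then, while a watch from `correct` starts at the present.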
Specifications
Kubernetes version:
Python version:
Python packages installed: (use pip freeze --all)

Released as 0.10