feat: batch window suppression #829
base: main
Conversation
Signed-off-by: paxbit <home@tecnique.de>
a54f7cc to 66dc4af
Thanks for this PR and for your time spent on it. However, I think I will not merge it, and for a good cause: while playing with two separate changes in separate branches, and after digging into the history of this "batching" solution, I found that it was implemented for one purpose only: ensuring consistency with the resource that Kopf itself has just patched. That consistency relied on the assumption that the patched version arrives in the queue fast enough. The assumption is wrong: it can take longer than 0.1 seconds for the patched version to arrive.

Instead, the resourceVersion returned by the PATCH operation should be used to ensure consistency. I have this change implemented already and now only struggle with auto-testing this case (asyncio tests are hard). Once resourceVersion-based consistency is ensured, the "batching" is not needed at all: it is not a feature of Kopf, it was a hack in the first place. And so, I am going to remove it completely.

However, with no "batching" in place, the question remains how to process the quickly arriving changes of a resource (as mentioned in your PR's comments: …)
This is the first change of the two mentioned. The second change is processing ALL incoming low-level events, including those considered "inconsistent" during the short time windows when the patch has been performed but the patched version has not yet been received. Those events must be excluded both from batching and from the consistency logic. This requires a more sophisticated split of event- and change-processing, which I have also drafted but not yet properly stabilised. It may take some time for me to finish those two PRs. Meanwhile, regarding this PR, and to ensure that you are not blocked: can you achieve the same effect by setting the batch window to …
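For illustration, the resourceVersion-based consistency could look roughly like this (a simplified, hypothetical sketch only, not the actual implementation; the class and method names are invented):

```python
from typing import Optional


class ConsistencyGate:
    """Tracks whether the version patched by the operator has arrived via the watch-stream."""

    def __init__(self) -> None:
        self._expected: Optional[str] = None

    def record_patch_response(self, patched_body: dict) -> None:
        # The PATCH response body carries the resourceVersion of the freshly patched state.
        self._expected = patched_body["metadata"]["resourceVersion"]

    def is_consistent(self, event_body: dict) -> bool:
        # An event is consistent if nothing is awaited, or if it is the patched version
        # itself; older intermediate versions fall into the short inconsistency window.
        if self._expected is None:
            return True
        if event_body["metadata"]["resourceVersion"] == self._expected:
            self._expected = None
            return True
        return False
```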
Hi @nolar,

Thanks for having a look! I think it is a good idea to get rid of batching altogether, and I now understand a bit better what it was meant to solve in the first place. Regarding setting the batching window to …
Both patches have solved our issues for the last few months. You wrote above that you have something in mind for the second part. Can you make use of the caching idea, or is it something you would rather disregard? Looking forward to the release integrating both topics :)
Hi @nolar,
it's been a while since #729. Back then I monkey-patched the behaviours discussed in the issue and have been running with that ever since.
I'd like to split this into two parts, of which this PR is the first. It disables the batch_window by default and allows returning to the previous behaviour on demand. I believe the idea behind batch_window is viable in some cases, but it is incorrect to assume it for all operators by default. The docstring tries to explain the situation.
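For reference, opting back into batching would look roughly like this via the operator settings (a minimal sketch only; `settings.batching.batch_window` is the existing Kopf knob, 0.1 is the historical default mentioned above, and the exact defaults after this PR are described in the docstring rather than here):

```python
import kopf


@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_) -> None:
    # Re-enable a small accumulation window for quickly arriving watch-events.
    # 0.1s is only an illustrative value; pick whatever fits your operator.
    settings.batching.batch_window = 0.1
```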
It would be great if you could have a look at it.
Thanks!