[PR] Run daemons only as long as they match the filtering criteria #342

Closed · kopf-archiver bot opened this issue Aug 18, 2020 · 0 comments
Labels: archive, enhancement (New feature or request)
kopf-archiver bot commented Aug 18, 2020

A pull request by nolar at 2020-04-06 20:09:26+00:00
Original URL: zalando-incubator/kopf#342
Merged by nolar at 2020-04-07 12:06:21+00:00

What do these changes do?

Run daemons only as long as they match the filtering criteria, making the daemon's filters continuously evaluated. Stop and re-spawn the daemons as soon as they stop or start matching the criteria (any number of times).

Description

While implementing an operator for the EphemeralVolumeClaim resource (which is also Kopf's tutorial) with Kopf 0.27rc1, it became clear that the newly introduced daemons (#330) have ambiguous behaviour when combined with filters: they were spawned only on resource creation or operator startup, and never re-evaluated, even when the criteria changed or the resource stopped matching them.

This problem didn't exist with the regular short-run handlers, as they were selected anew each time changes/events happened, and never existed for a long time.

This PR brings daemons & timers with filters to a clear and consistent behaviour:

  • Once the resource stops matching the daemon's criteria, the daemon is stopped too.
  • Once the resource starts matching the daemon's criteria again, the daemon is started again too.

Semantically, the daemon's filters define when the daemon should be running on a continuous basis, not only when it should be spawned on creation/restart (and then ignored afterwards).

The spawning/stopping can happen due to either resource changes or criteria changes (though it is triggered only by events on the resource).


For example, consider an operator:

import asyncio
import kopf

def should_daemon_run(spec, **_):
    return spec.get('field', '').startswith('value')

@kopf.daemon('zalando.org', 'v1', 'kopfexamples', when=should_daemon_run, cancellation_timeout=1.0)
async def my_daemon(logger, **_):
    while True:
        await asyncio.sleep(5.0)
        logger.info("==> ping")

Once an example object is created (with spec.field == "value"), the daemon is instantly spawned:

[2020-04-06 21:53:19,192] kopf.objects         [DEBUG   ] [default/kopf-example-1] Adding the finalizer, thus preventing the actual deletion.
[2020-04-06 21:53:19,194] kopf.objects         [DEBUG   ] [default/kopf-example-1] Patching with: {'metadata': {'finalizers': ['kopf.zalando.org/KopfFinalizerMarker']}}
[2020-04-06 21:53:19,199] kopf.objects         [DEBUG   ] [default/kopf-example-1] Daemon 'my_daemon' is invoked.
[2020-04-06 21:53:19,319] kopf.objects         [DEBUG   ] [default/kopf-example-1] Handling cycle is finished, waiting for new changes since now.
[2020-04-06 21:53:24,205] kopf.objects         [INFO    ] [default/kopf-example-1] ==> ping
[2020-04-06 21:53:29,210] kopf.objects         [INFO    ] [default/kopf-example-1] ==> ping

Then, we can modify the object so that it no longer matches the criteria (or we could modify the criteria and trigger an event on the resource):

kubectl patch -f examples/obj.yaml --type merge -p '{"spec": {"field": "other-value"}}'

The daemon will be stopped, as it no longer matches the criteria:

[2020-04-06 21:54:09,241] kopf.objects         [INFO    ] [default/kopf-example-1] ==> ping
[2020-04-06 21:54:12,023] kopf.objects         [DEBUG   ] [default/kopf-example-1] Removing the finalizer, as there are no handlers requiring it.
[2020-04-06 21:54:12,024] kopf.objects         [DEBUG   ] [default/kopf-example-1] Daemon 'my_daemon' is signalled to exit by force.
[2020-04-06 21:54:12,024] kopf.objects         [DEBUG   ] [default/kopf-example-1] Patching with: {'metadata': {'finalizers': []}}
[2020-04-06 21:54:12,027] kopf.objects         [WARNING ] [default/kopf-example-1] Daemon 'my_daemon' is cancelled. Will escalate.
[2020-04-06 21:54:12,038] kopf.objects         [DEBUG   ] [default/kopf-example-1] Sleeping was skipped because of the patch, 1.0 seconds left.
[2020-04-06 21:54:12,145] kopf.objects         [DEBUG   ] [default/kopf-example-1] Handling cycle is finished, waiting for new changes since now.

Then, we can revert the change:

kubectl patch -f examples/obj.yaml --type merge -p '{"spec": {"field": "value-123"}}'

The daemon will be spawned again, because it matches the criteria again:

[2020-04-06 21:55:05,378] kopf.objects         [DEBUG   ] [default/kopf-example-1] Adding the finalizer, thus preventing the actual deletion.
[2020-04-06 21:55:05,379] kopf.objects         [DEBUG   ] [default/kopf-example-1] Patching with: {'metadata': {'finalizers': ['kopf.zalando.org/KopfFinalizerMarker']}}
[2020-04-06 21:55:05,381] kopf.objects         [DEBUG   ] [default/kopf-example-1] Daemon 'my_daemon' is invoked.
[2020-04-06 21:55:05,503] kopf.objects         [DEBUG   ] [default/kopf-example-1] Handling cycle is finished, waiting for new changes since now.
[2020-04-06 21:55:10,382] kopf.objects         [INFO    ] [default/kopf-example-1] ==> ping
[2020-04-06 21:55:15,387] kopf.objects         [INFO    ] [default/kopf-example-1] ==> ping

And so on.

Please also notice how the finalizer is added and removed to keep the resource blocked from deletion as long as any daemons are running, and freed for deletion when none are (this was part of the original implementation, now adjusted to fit this highly dynamic filtering).


A little note: once the daemon exits of its own accord, i.e. without being terminated by the framework, it is considered an intentional termination, and the daemon is never spawned again within the current operator process.

For cross-restart prevention, there is currently no dedicated syntax feature, but there is a simple trick to achieve it with 2 extra lines of code (documented in this PR too; a sketch follows below).
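
As a rough illustration of such a trick (not necessarily the exact snippet documented in the PR), the daemon can mark the resource, e.g. with an annotation set via the patch kwarg, and filter on that mark in when=, so that it is not re-spawned even after an operator restart. The annotation key and function names below are hypothetical:

import kopf

def not_done_yet(annotations, **_):
    # Hypothetical marker annotation; any unique key would do.
    return annotations.get('examples.zalando.org/daemon-done') != 'yes'

@kopf.daemon('zalando.org', 'v1', 'kopfexamples', when=not_done_yet)
async def one_shot_daemon(patch, logger, **_):
    logger.info("Doing the one-time work...")
    # Mark the resource; the filter then stops matching, so the daemon is not
    # re-spawned, not even after the operator restarts.
    patch.metadata.annotations['examples.zalando.org/daemon-done'] = 'yes'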

Issues/PRs

Issues: #19

Related: #330 #150 #271 #317 #122

Type of changes

  • New feature (non-breaking change which adds functionality)

Checklist

  • The code addresses only the mentioned problem, and this problem only
  • I think the code is well written
  • Unit tests for the changes exist
  • Documentation reflects the changes
  • If you provide code modification, please add yourself to CONTRIBUTORS.txt