Try a WorkerContext? #1
Comments
Hey @richardsheridan, impressed you noticed this! I'm traveling right now btw, so my replies may be somewhat limited until next week.

Short story is that this really is a wrapper around a […]. I would say that if the […]

On the subject of wishlists, it would also be great to have a way to cancel tasks on workers less violently than SIGKILL. I'd love for cancellation to propagate into the worker process as […].

Overall though, thanks for trio-parallel, I found it very helpful and usable!
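To make that cancellation wish concrete: one cooperative workaround today is to pass a picklable flag into the job and poll it. Below is a minimal, untested sketch; `_interruptible_work` and `soft_cancellable_run` are hypothetical names, and it assumes a `multiprocessing.Manager` event proxy pickles cleanly into trio-parallel's spawned workers:

```python
import multiprocessing

import trio
import trio_parallel


def _interruptible_work(cancel_flag):
    # Worker side: poll the shared flag at convenient checkpoints and wind
    # down gracefully instead of waiting to be SIGKILLed.
    total = 0
    for i in range(50_000_000):
        if i % 100_000 == 0 and cancel_flag.is_set():
            return None
        total += i
    return total


async def soft_cancellable_run():
    # Manager proxies (unlike a bare multiprocessing.Event) can be pickled,
    # so the flag can travel to the worker as an ordinary argument.
    with multiprocessing.Manager() as manager:
        cancel_flag = manager.Event()

        async def set_flag_on_cancel():
            try:
                await trio.sleep_forever()
            finally:
                cancel_flag.set()  # runs when the enclosing scope is cancelled

        async with trio.open_nursery() as nursery:
            nursery.start_soon(set_flag_on_cancel)
            try:
                # Default (non-killing) mode: the await runs to completion,
                # but the worker returns early once it notices the flag.
                return await trio_parallel.run_sync(
                    _interruptible_work, cancel_flag
                )
            finally:
                nursery.cancel_scope.cancel()  # stop the watcher on normal exit
```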
Cool! Let me break this down, in reverse order.

Actually, SIGINT can work as you describe by running […]. I don't have a generic way to run code that respects trio-style cancellation, and I don't want to maintain something that complex, so if you really need that, you should try tractor.

Worker scaling is "automatic" based on the way LIFO caching interacts with the […]. If on the other hand you really want a specific, constant number of workers, you can just do […].

Finally, consider retiring workers instead of restarting the pool if what you need is fresh processes (there's a `retire` sketch after this comment). Still, restarting the pool is also an intended use case, but a trionic recipe that doesn't touch internal methods would look more like this:

```python
import functools
import os
from pathlib import Path

import trio
import trio_parallel

from fused_local.user_code import (
    import_user_code,
    watch_with_channel,  # highly recommend making something like this
)

# Module level, so spawned worker processes can resolve these names too.
USER_CODE_PATH = Path.cwd() / "example.py"
N_WORKERS = 2


def _worker_init():
print(f"worker init {os.getpid()}")
import_user_code(USER_CODE_PATH)


def _handle_changed_file(file):
    print(f"changed {file} in worker {os.getpid()}")


def _drain_and_merge_batches(batch):
    # Greedily absorb any batches already queued, so rapid-fire changes
    # collapse into a single submission.
    batch = set(batch)
    try:
        while True:
            batch |= change_recv.receive_nowait()
    except trio.WouldBlock:
        return batch


async def submit_batch(batch):
    # A fresh context per batch: workers spawn, run _worker_init, handle the
    # batch, and are torn down when the `async with` block exits.
    async with (
        trio_parallel.open_worker_context(init=_worker_init) as ctx,
        trio.open_nursery() as nursery,
    ):
        for changed_file in batch:
            # start_soon doesn't forward keyword arguments, so bind
            # limiter= with functools.partial.
            nursery.start_soon(
                functools.partial(
                    ctx.run_sync,
                    _handle_changed_file,
                    changed_file,
                    limiter=limiter,
                )
            )


if __name__ == "__main__":
    # Unbounded buffer: the watcher never blocks; batches get merged on
    # the receiving side by _drain_and_merge_batches.
    change_send, change_recv = trio.open_memory_channel(float("inf"))
    limiter = trio.CapacityLimiter(N_WORKERS)
    async def main():
        async with trio.open_nursery() as nursery:
            nursery.start_soon(watch_with_channel, USER_CODE_PATH, change_send)
            # Receive channels support async iteration; each item here is a
            # batch (set) of changed files.
            async for changed_file_batch in change_recv:
                changed_file_batch = _drain_and_merge_batches(changed_file_batch)
                await submit_batch(changed_file_batch)
                # or maybe you're hasty:
                # something_to_cancel_previous_batch()
                # nursery.start_soon(submit_batch, changed_file_batch)
    try:
        trio.run(main)
    except* KeyboardInterrupt:  # Python 3.11+; also catches it inside exception groups
        print("shut down")
```

Not tested or anything, but it gives you the shape of things. You could also wire up a cancel scope to handle early stopping for rapid changes, if handling changes takes a long time.
Hi, is there any reason that this wrapper can't be replaced with a WorkerContext?

(Referencing fused-local/src/fused_local/workers.py, lines 24 to 34 at commit 9733b4a.)

If there's something I could do to make my feature a little more usable or better documented, I'd like to hear it. :)
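For readers without the linked snippet: the proposal amounts to swapping a hand-rolled pool wrapper for something shaped like this (a hypothetical minimal example, not the actual fused-local code):

```python
import trio
import trio_parallel


def _cpu_bound(n):
    # Stand-in for whatever synchronous work the wrapper dispatches.
    return sum(i * i for i in range(n))


async def main():
    # Let a WorkerContext own the worker processes and their lifetimes,
    # instead of wrapping a process pool by hand.
    async with trio_parallel.open_worker_context() as ctx:
        result = await ctx.run_sync(_cpu_bound, 10_000)
        print(result)


trio.run(main)
```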