### Description

`ctx.run` does not properly handle synchronous functions: it runs them directly on the event loop instead of offloading them. This causes inconsistent behavior:
- Sometimes tasks execute in parallel as expected.
- Other times they execute sequentially, doubling the execution time.
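The effect is easy to reproduce outside the SDK. The sketch below (all names are illustrative, not the SDK's) contrasts running a sync action inline on the event loop with offloading it via `asyncio.to_thread`: inline, the two sleeps serialize; offloaded, they overlap.

```python
import asyncio
import time

async def run_inline(action):
    # Mimics the buggy path: the sync action blocks the loop for its whole call.
    return action()

async def run_offloaded(action):
    # Mimics the proposed fix: the sync action runs in a worker thread.
    return await asyncio.to_thread(action)

async def timed(run):
    start = time.monotonic()
    await asyncio.gather(run(lambda: time.sleep(0.2)),
                         run(lambda: time.sleep(0.2)))
    return time.monotonic() - start

inline = asyncio.run(timed(run_inline))        # ~0.4s: sleeps serialize
offloaded = asyncio.run(timed(run_offloaded))  # ~0.2s: sleeps overlap
print(f"{inline=:.2f} {offloaded=:.2f}")
```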
### Steps to Reproduce

Run the following code and start two handler invocations at the same time:

```python
@greeter.handler()
async def sleep_handler(ctx: Context, secs: int):
    await ctx.run("waiting", lambda: time.sleep(secs))
    return f"time.sleep {secs} seconds"


@greeter.handler()
async def greet(ctx: Context, req: GreetingRequest) -> Greeting:
    print("before")
    promise = ctx.service_call(sleep_handler, arg=5)
    promise2 = ctx.service_call(sleep_handler, arg=5)
    start_time = time.time()
    results = [await promise, await promise2]
    end_time = time.time()
    print(f"{end_time - start_time=}")
    print(f"{req=}")
    return Greeting(
        message=f"You said hi to {req.name} after {end_time - start_time} seconds!"
    )
```
### Expected Behavior

`ctx.service_call` should allow both tasks to execute in parallel, finishing in ~5 seconds.
### Actual Behavior

- The first `ctx.run` blocks for 5 seconds, then the second runs for another 5 seconds.
- Total execution time is ~10 seconds instead of ~5, though strangely it sometimes behaves as expected.
Example Output (Sequential Execution):

```
before
before
end_time - start_time=10.020586013793945
req=GreetingRequest(name='Name1')
end_time - start_time=9.34077501296997
req=GreetingRequest(name='Name2')
```
### Possible Root Cause

Currently, `ctx.run` executes sync functions inline:

```python
if inspect.iscoroutinefunction(action):
    action_result = await action()
else:
    action_result = action()  # Blocks the event loop
```

This prevents efficient execution of multiple `ctx.run` calls.
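A quick check of how that dispatch classifies callables (a standalone sketch, not SDK code) shows why the repro hits the blocking branch: a lambda wrapping `time.sleep` is a plain function, so `inspect.iscoroutinefunction` routes it to the inline path.

```python
import asyncio
import inspect
import time

async def async_action():
    await asyncio.sleep(0)

def sync_action():
    time.sleep(0)

assert inspect.iscoroutinefunction(async_action)     # awaited directly
assert not inspect.iscoroutinefunction(sync_action)  # runs inline today, blocking
# A lambda wrapping a blocking call is also a plain function, so it lands on
# the blocking branch -- exactly the shape used in the repro above.
assert not inspect.iscoroutinefunction(lambda: time.sleep(1))
print("dispatch checks passed")
```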
### Proposed Fix

Modify `ctx.run` to automatically offload sync functions:

```python
import asyncio

if inspect.iscoroutinefunction(action):
    action_result = await action()
else:
    action_result = await asyncio.to_thread(action)  # Prevents blocking
```
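Wrapped in a standalone helper (`run` here is an illustrative stand-in, not the SDK's actual `ctx.run`), the fixed dispatch lets sync and async actions overlap freely:

```python
import asyncio
import inspect
import time

async def run(action):
    # Sketch of the fixed dispatch: await coroutine functions directly,
    # offload plain (potentially blocking) functions to a worker thread.
    if inspect.iscoroutinefunction(action):
        return await action()
    return await asyncio.to_thread(action)

async def coro_action():
    await asyncio.sleep(0.2)
    return "coro"

async def main():
    start = time.monotonic()
    results = await asyncio.gather(
        run(lambda: time.sleep(0.2) or "sync-1"),
        run(lambda: time.sleep(0.2) or "sync-2"),
        run(coro_action),
    )
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results, f"{elapsed=:.2f}")  # all three overlap: ~0.2s total, not ~0.6s
```

Note that `asyncio.to_thread` requires Python 3.9+; on older versions, `loop.run_in_executor(None, action)` is the equivalent.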
### Impact

- Prevents event loop blocking.
- Enables multiple `ctx.run` calls to execute concurrently.
- Improves performance for I/O-bound and other blocking tasks.
- Ensures consistent execution time (always ~5s instead of sometimes ~10s).
Example Output (Parallel Execution):

```
before
before
end_time - start_time=5.017489910125732
req=GreetingRequest(name='asdf24')
end_time - start_time=5.016418933868408
req=GreetingRequest(name='asdfg24')
```
Would love to hear thoughts on this!
Restate version: 0.5.1