Extend wait_for on workflow context to take multiple futures and a condition block #111
New spec (`@@ -0,0 +1,28 @@`):

```ruby
require 'workflows/wait_for_workflow'

describe WaitForWorkflow do
  it 'signals at workflow start time' do
    workflow_id = SecureRandom.uuid
    run_id = Temporal.start_workflow(
      WaitForWorkflow,
      10, # number of echo activities to run
      2,  # max activity parallelism
      'signal_name',
      options: { workflow_id: workflow_id }
    )

    Temporal.signal_workflow(WaitForWorkflow, 'signal_name', workflow_id, run_id)

    result = Temporal.await_workflow_result(
      WaitForWorkflow,
      workflow_id: workflow_id,
      run_id: run_id,
    )

    expect(result.length).to eq(3)
    expect(result[:signal]).to eq(true)
    expect(result[:timer]).to eq(true)
    expect(result[:activity]).to eq(true)
  end
end
```
New example workflow (`@@ -0,0 +1,80 @@`):

```ruby
require 'activities/echo_activity'
require 'activities/long_running_activity'

# This example workflow exercises all three conditions that can change state that is being
# awaited upon: activity completion, sleep completion, signal received.
class WaitForWorkflow < Temporal::Workflow
  def execute(total_echos, max_echos_at_once, expected_signal)
    signals_received = {}

    workflow.on_signal do |signal, input|
      signals_received[signal] = input
    end

    workflow.wait_for do
      workflow.logger.info("Awaiting #{expected_signal}, signals received so far: #{signals_received}")
      signals_received.key?(expected_signal)
    end

    # Run an activity but with a max time limit by starting a timer. This activity
    # will not complete before the timer, which may result in a failed activity task after the
    # workflow is completed.
    long_running_future = LongRunningActivity.execute(15, 0.1)
    timeout_timer = workflow.start_timer(1)
    workflow.wait_for(timeout_timer, long_running_future)

    timer_beat_activity = timeout_timer.finished? && !long_running_future.finished?

    # This should not wait further. The first future has already finished, and therefore
    # the second one should not be awaited upon.
    long_timeout_timer = workflow.start_timer(15)
    workflow.wait_for(timeout_timer, long_timeout_timer)
    raise 'The workflow should not have waited for this timer to complete' if long_timeout_timer.finished?

    block_called = false
    workflow.wait_for(timeout_timer) do
      # This should never be called because the timeout_timer future was already
      # finished before the wait was even called.
      block_called = true
    end
    raise 'Block should not have been called' if block_called

    workflow.wait_for(long_timeout_timer) do
      # This condition will immediately be true and not result in any waiting or dispatching
      true
    end
    raise 'The workflow should not have waited for this timer to complete' if long_timeout_timer.finished?

    activity_futures = {}
    echos_completed = 0

    total_echos.times do |i|
      workflow.wait_for do
        workflow.logger.info("Activities in flight #{activity_futures.length}")
        # Pause workflow until the number of active activity futures is less than the
        # maximum. This throttles new activities from being started, guaranteeing that
        # at most max_echos_at_once of these activities are running at once.
        activity_futures.length < max_echos_at_once
      end

      future = EchoActivity.execute("hi #{i}")
      activity_futures[i] = future

      future.done do
        activity_futures.delete(i)
        echos_completed += 1
      end
    end

    workflow.wait_for do
      workflow.logger.info("Waiting for queue to drain, size: #{activity_futures.length}")
      activity_futures.empty?
    end

    {
      signal: signals_received.key?(expected_signal),
      timer: timer_beat_activity,
      activity: echos_completed == total_echos
    }
  end
end
```
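The throttling pattern in the example workflow (wait until fewer than `max_echos_at_once` activities are in flight, then drain the queue) can be sketched outside Temporal with plain threads. This is an illustrative analogue only: `run_throttled` and its internals are invented for this sketch, and inside Temporal the equivalent waiting is done deterministically and cooperatively with `workflow.wait_for` on a single fiber, not with threads.

```ruby
# Plain-threads analogue of the workflow's throttling loop. Hypothetical
# helper for illustration; not part of the Temporal SDK.
def run_throttled(total, max_at_once)
  mutex = Mutex.new
  cond = ConditionVariable.new
  in_flight = 0
  completed = 0
  threads = []

  total.times do |i|
    mutex.synchronize do
      # Analogue of: workflow.wait_for { activity_futures.length < max_echos_at_once }
      cond.wait(mutex) while in_flight >= max_at_once
      in_flight += 1
    end

    threads << Thread.new do
      # Stand-in for EchoActivity.execute("hi #{i}") completing
      mutex.synchronize do
        in_flight -= 1
        completed += 1
        cond.broadcast
      end
    end
  end

  # Analogue of: workflow.wait_for { activity_futures.empty? }
  mutex.synchronize do
    cond.wait(mutex) while in_flight > 0
  end
  threads.each(&:join)
  completed
end
```

The condition variable plays the role of the dispatcher here: each completion wakes up anyone blocked on the "fewer in flight" or "queue drained" condition, which then re-checks its predicate, just as `wait_for` re-evaluates its block on each dispatch.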
Changes to the workflow context (`@@ -215,14 +215,54 @@`):

```diff
@@ -215,14 +215,54 @@ def wait_for_all(*futures)
     return
   end

-  def wait_for(future)
+  # Block workflow progress until any future is finished or any unblock_condition
+  # block evaluates to true.
+  def wait_for(*futures, &unblock_condition)
+    if futures.empty? && unblock_condition.nil?
+      raise 'You must pass either a future or an unblock condition block to wait_for'
+    end
+
     fiber = Fiber.current
+    should_yield = false
+    blocked = true
+
+    if futures.any?
+      if futures.any?(&:finished?)
+        blocked = false
+      else
+        should_yield = true
+        futures.each do |future|
+          dispatcher.register_handler(future.target, Dispatcher::WILDCARD) do
+            if blocked && future.finished?
+              # Because this block can run for any dispatch, ensure the fiber is only
+              # resumed one time by checking if it's already been unblocked.
+              blocked = false
+              fiber.resume
+            end
+          end
+        end
+      end
+    end

-    dispatcher.register_handler(future.target, Dispatcher::WILDCARD) do
-      fiber.resume if future.finished?
+    if blocked && unblock_condition
+      if unblock_condition.call
+        blocked = false
+        should_yield = false
+      else
+        should_yield = true
+
+        dispatcher.register_handler(Dispatcher::WILDCARD, Dispatcher::WILDCARD) do
+          # Because this block can run for any dispatch, ensure the fiber is only
+          # resumed one time by checking if it's already been unblocked.
+          if blocked && unblock_condition.call
+            blocked = false
+            fiber.resume
+          end
+        end
+      end
     end

-    Fiber.yield
+    Fiber.yield if should_yield

     return
   end
```

Review thread on the wildcard handler registration:

**Contributor:** There's a slight problem here: we're not de-registering the handler afterwards, which means that it will be called for every event until the end of a workflow. It won't be doing anything because of that. This can be done in a follow-up, I might have a look at it.

**Author:** I suppose once it's unblocked, it can be unregistered. The interplay on this is messy because it needs a callback. I'll see if I can do this cleanly or if it's best to revise later.
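The resume-once guard (`blocked` flag) in the new `wait_for` can be demonstrated with plain Ruby fibers, independent of Temporal's dispatcher. A minimal sketch: several handlers may fire while a fiber is blocked, but only the first one that still observes `blocked == true` resumes it. Without the guard, a second resume on the finished fiber would raise `FiberError`.

```ruby
# Minimal sketch (plain Ruby, no Temporal) of the resume-once guard.
blocked = true
result = nil

fiber = Fiber.new do
  Fiber.yield # stands in for the Fiber.yield inside wait_for
  :done
end
fiber.resume # run the fiber up to its yield point

# Two handlers, both of which would try to resume the blocked fiber,
# like two dispatch callbacks firing for the same waiting workflow.
handlers = Array.new(2) do
  proc do
    if blocked
      # Only the first caller flips the flag and resumes the fiber.
      blocked = false
      result = fiber.resume
    end
  end
end

handlers.each(&:call) # first handler resumes; second is a no-op
```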
Review thread on the overall structure:

**Contributor:** I wonder if we actually need to separate between the futures and unblock condition cases here? There are probably arguments to both approaches. What do you think about this one?

**Author:** I believe this code path is used when invoking activities synchronously or sleeps, which are almost certainly the most common cases. I could register separate handlers here, rather than handling the separate cases inside the dispatcher. I'll give this alternative a try to see how it looks once implemented, especially with the unregistration in place.

**Contributor:** Sounds good 👍 But also happy to rearrange things in later PRs to keep this one contained and ship the feature. As long as public-facing APIs are stable, which I believe they are.
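One way the de-registration concern raised above could be addressed is to have `register_handler` return an unsubscribe proc that the caller invokes once the fiber is unblocked. This is a hypothetical sketch: the `Dispatcher` class name and `register_handler`/`dispatch` mirror the PR, but the return-a-proc API and the internals below are assumptions, not the PR's actual implementation.

```ruby
# Hypothetical dispatcher whose register_handler returns an unsubscribe
# proc, so wait_for could drop its handlers after the fiber resumes.
class Dispatcher
  WILDCARD = '*'

  def initialize
    @handlers = {}
    @next_id = 0
  end

  def register_handler(target, &handler)
    id = (@next_id += 1)
    @handlers[id] = [target, handler]
    # Caller keeps this proc and calls it once unblocked.
    -> { @handlers.delete(id) }
  end

  def dispatch(target)
    @handlers.values.each do |registered_target, handler|
      handler.call if registered_target == WILDCARD || registered_target == target
    end
  end
end

dispatcher = Dispatcher.new
calls = 0
unsubscribe = dispatcher.register_handler(Dispatcher::WILDCARD) { calls += 1 }
dispatcher.dispatch('event') # handler fires once
unsubscribe.call
dispatcher.dispatch('event') # handler no longer registered; calls unchanged
```

With this shape, the "messy interplay" the author mentions reduces to storing the returned procs alongside the `blocked` flag and invoking them right after `blocked = false`.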