Fix failing tests on the DurableTask.Emulator, fixing https://github.com/Azure/durabletask/issues/255 #267
Conversation
There seems to be a permissions issue with the CI.
```csharp
    mapState = new Dictionary<string, OrchestrationState>();
    this.instanceStore[workItem.InstanceId] = mapState;
}

CommitState(newOrchestrationRuntimeState, state).GetAwaiter().GetResult();
```
We have to use GetAwaiter().GetResult() here since we hold a lock; it might make sense to move over to a more async-friendly construct in the future.
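For readers unfamiliar with the constraint: C# rejects `await` inside a `lock` statement, which is why the task is drained synchronously here. A minimal sketch of the pattern, with placeholder names (`CommitState` below is a stand-in, not the emulator's actual method):

```csharp
using System.Threading.Tasks;

class LockedCommitSketch
{
    readonly object thisLock = new object();

    public void Complete()
    {
        lock (this.thisLock)
        {
            // await CommitState(); // CS1996: cannot await in the body of a lock statement
            CommitState().GetAwaiter().GetResult(); // blocks; rethrows exceptions unwrapped
        }
    }

    Task CommitState() => Task.CompletedTask; // placeholder for the real async commit
}
```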
```csharp
if (tasks.Count > 0)
{
    Task.WaitAll(tasks.ToArray());
```
Does this need to be blocking, and also inside a lock? Should we use Task.WhenAll with GetAwaiter() to be consistent, so that when we flip to async this is already correct?
I don't believe you can do a GetAwaiter().GetResult() with a Task.WhenAll, since it doesn't return a result.
Moving it out of the lock should not have any side effects as far as I can tell, since it is simply signaling the clients waiting on the orchestration, but I would rather not risk it in this change. It may be better suited for when we do move things to async.
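For context on the two patterns discussed: `GetAwaiter().GetResult()` does compile against the non-generic `Task` returned by `Task.WhenAll` (its `GetResult()` simply returns void); the observable difference from `Task.WaitAll` is how failures surface. A small illustrative sketch, not code from this PR:

```csharp
using System.Threading.Tasks;

class BlockingPatternsSketch
{
    static void Main()
    {
        Task[] tasks = { Task.Run(() => { }), Task.Run(() => { }) };

        // Pattern currently in the diff: blocks, wraps failures in AggregateException.
        Task.WaitAll(tasks);

        // Suggested alternative: also blocks; GetResult() on the non-generic
        // Task returned by WhenAll returns void and rethrows the first
        // failure unwrapped, matching 'await' semantics.
        Task.WhenAll(tasks).GetAwaiter().GetResult();
    }
}
```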
It looks like there is an issue in Azure DevOps where pull requests from external contributors are not able to actually execute the tests properly. So far I haven't been able to find a workaround, but we're told that the Azure DevOps team is working to address this general problem. I'm going to go ahead and merge this, then run another build to validate that master is green.
* Fixed Continue as New for the local orchestration service.
* Match the logic we are using in the ServiceBus orchestration service for setting the session state.
This fixes the following failing tests:

MockRecreateOrchestration Test:
This was failing because the emulator was not clearing its session state for a non-running orchestration, similar to the way it is done in the ServiceBus service.
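A rough sketch of the behavior described, assuming a simple in-memory map; all names here are hypothetical, not the emulator's actual fields:

```csharp
using System.Collections.Generic;

class SessionStateSketch
{
    readonly Dictionary<string, byte[]> sessionState = new Dictionary<string, byte[]>();

    public void SetSessionState(string instanceId, bool isRunning, byte[] newState)
    {
        if (!isRunning)
        {
            // A non-running orchestration keeps no session state, so a
            // recreated instance starts from a clean slate.
            this.sessionState.Remove(instanceId);
        }
        else
        {
            this.sessionState[instanceId] = newState;
        }
    }
}
```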
MockGenerationTest:
This was failing because the ContinueAsNew state of the previous generation was not being saved to the instance store along with the newly started ContinueAsNew state.
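And a hypothetical sketch of this second fix: persisting both generations' states, keyed by execution ID so the new generation does not overwrite the old one (all names illustrative):

```csharp
using System.Collections.Generic;

class ContinueAsNewStoreSketch
{
    // Keyed by (instanceId, executionId) so generations don't overwrite each other.
    readonly Dictionary<(string InstanceId, string ExecutionId), string> instanceStore
        = new Dictionary<(string, string), string>();

    public void OnContinueAsNew(string instanceId, string oldExecutionId, string newExecutionId)
    {
        // Persist the previous generation's terminal (ContinuedAsNew) state...
        this.instanceStore[(instanceId, oldExecutionId)] = "ContinuedAsNew";

        // ...and the newly started generation's state alongside it.
        this.instanceStore[(instanceId, newExecutionId)] = "Running";
    }
}
```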
This fixes #255