Description
The call to DurableClient's StartNewAsync<T>(string orchestratorFunctionName, string instanceId, T input) throws an exception when the given input's serialized (JSON) size is between 32769 and 45150 characters.
Expected behavior
When the size is <= 32768 characters, the serialized data should be automatically inlined in a table column for the orchestrator.
When the size is > 32768 characters, the serialized data should be uploaded as a blob for the orchestrator.
The automatic behavior should properly detect whether the data fits in a table column, and data that is too large should be stored in a blob without the user ever needing to know about or account for this behavior.
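The expected routing above amounts to a simple threshold check against the Table Storage per-property limit (64 KB, i.e. 32,768 UTF-16 code units for a string). A minimal sketch, in Python for illustration only — the extension itself is C#, and choose_input_store is an invented name:

```python
# Azure Table string properties are limited to 64 KB, which for UTF-16
# strings means 32,768 characters per property.
TABLE_PROPERTY_MAX_CHARS = 32 * 1024  # 32768

def choose_input_store(serialized_input: str) -> str:
    """Return where the serialized orchestrator input should be stored."""
    if len(serialized_input) <= TABLE_PROPERTY_MAX_CHARS:
        return "table"  # inline in the Instances table column
    return "blob"       # offload to blob storage, keep a reference in the table

print(choose_input_store("x" * 32768))  # -> table
print(choose_input_store("x" * 32769))  # -> blob
```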
Actual behavior
With serialized data lengths between 32769 and 45150 characters, the StartNewAsync call throws an exception.
The call stack reveals that a Bad Request response is returned by Table Storage.
It should also be mentioned that although StartNewAsync throws, the Orchestrator Function does actually get started, and it receives the expected input.
This error and call stack were produced:
System.Private.CoreLib: Exception while executing function: DurableTrigger. Microsoft.WindowsAzure.Storage: Bad Request.
ExtendedErrorInformation: The property value exceeds the maximum allowed size (64KB). If the property value is a string, it is UTF-16 encoded and the maximum number of characters should be 32K or less.
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteAsyncInternal[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext, CancellationToken token)
at DurableTask.AzureStorage.Tracking.AzureTableTrackingStore.SetNewExecutionAsync(ExecutionStartedEvent executionStartedEvent, String eTag, String inputStatusOverride) in C:\source\durabletask\src\DurableTask.AzureStorage\Tracking\AzureTableTrackingStore.cs:line 845
at DurableTask.AzureStorage.AzureStorageOrchestrationService.CreateTaskOrchestrationAsync(TaskMessage creationMessage, OrchestrationStatus[] dedupeStatuses) in C:\source\durabletask\src\DurableTask.AzureStorage\AzureStorageOrchestrationService.cs:line 1362
at DurableTask.Core.TaskHubClient.InternalCreateOrchestrationInstanceWithRaisedEventAsync(String orchestrationName, String orchestrationVersion, String orchestrationInstanceId, Object orchestrationInput, IDictionary`2 orchestrationTags, OrchestrationStatus[] dedupeStatuses, String eventName, Object eventData, Nullable`1 startAt) in C:\source\durabletask\src\DurableTask.Core\TaskHubClient.cs:line 608
at Microsoft.Azure.WebJobs.Extensions.DurableTask.DurableClient.Microsoft.Azure.WebJobs.Extensions.DurableTask.IDurableOrchestrationClient.StartNewAsync[T](String orchestratorFunctionName, String instanceId, T input) in D:\a\r1\a\azure-functions-durable-extension\src\WebJobs.Extensions.DurableTask\ContextImplementations\DurableClient.cs:line 149
at DurableFunctionsDebug.Durables.Run(HttpRequest req, IDurableClient durableClient, ILogger log) in F:\Dev\DurableFunctionsDebug\DurableFunctionsDebug\Durables.cs:line 48
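The 64 KB / 32 K-character limit in the error above suggests one plausible explanation for the 32769..45150 failure window: the blob-offload decision may be keyed to a larger payload threshold than the table property limit, leaving a gap where inputs are inlined into the table and rejected. This is a hypothesis, not confirmed from the extension's source; the 45150 figure below is taken from the observed boundary, and is_in_failure_window is an invented helper, sketched in Python for clarity:

```python
# Hypothesis: if blob offload only triggers above some larger payload
# threshold (inferred here from the observed 45150-character boundary),
# while table string properties cap at 32,768 UTF-16 characters, then
# inputs in between are inlined into the table and rejected with 400.
TABLE_PROPERTY_MAX_CHARS = 32 * 1024     # Table Storage per-property limit
ASSUMED_OFFLOAD_THRESHOLD_CHARS = 45150  # inferred from this repro, not from source

def is_in_failure_window(n_chars: int) -> bool:
    """True when input is too big for a table column but, under the assumed
    threshold, not big enough to trigger blob offload."""
    return TABLE_PROPERTY_MAX_CHARS < n_chars <= ASSUMED_OFFLOAD_THRESHOLD_CHARS

print(is_in_failure_window(32768))  # False: fits inline in the table
print(is_in_failure_window(40000))  # True: throws per this report
print(is_in_failure_window(50000))  # False: offloaded to blob
```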
In addition, the behavior in the <hubname>Instances table differs. In the normal situation, we get a proper pending status row just prior to running the orchestrator, and a proper completed status row when it completes successfully.
However, in the case where StartNewAsync throws, we never get a pending status row; only after the Orchestrator runs do we get a new row.
Note that the TaskHubName is not set for this row. It is unclear whether this is significant, but it is nevertheless a finding.
App Details
Durable Functions extension version: 2.3
Azure Functions runtime version: 3.0.9
Programming language used: C#
In Azure
The reproduction sample project was not deployed to Azure, but the same issue was verified in another project running in Azure. This issue is not tied to Azure Storage Emulator.
Relevant source code snippets
The bug is very easy to reproduce: all you need is a trigger Function and an Orchestrator Function, and then inputs of different sizes.
A full clean reproduction of this bug is available here as a single Function App project:
https://github.com/Jusas/DurableFunctionsOrchestratorInputSizeBug
Known workarounds
Ensure that the data input to the StartNewAsync method is always over 45150 characters in serialized form, to force the reliable use of blob storage for your input data. For example, adding a frame with 45150 characters of "padding data" does the trick.