
Overwrite max message size in host.json #29

Closed
alrod opened this issue Oct 1, 2019 · 14 comments · Fixed by #77


alrod commented Oct 1, 2019

By default, the max message size is 256 KB, which corresponds to the Basic plan. We need to add the ability to override the default value in host.json settings, since the Standard plan has a 1 MB max message size.

Related: #8
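For illustration, a hypothetical sketch of what such a host.json override could look like. The maxMessageSizeInBytes property name is purely illustrative (it is not an actual setting of the extension); 1048576 bytes corresponds to the Standard tier's 1 MB limit mentioned above.

```json
{
  "version": "2.0",
  "extensions": {
    "eventHubs": {
      "maxMessageSizeInBytes": 1048576
    }
  }
}
```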

ghost added the Needs: triage 🔍 label on Oct 1, 2019
alrod added this to the Triaged milestone on Oct 25, 2019

nzthiago commented Jun 12, 2020

I did some investigation, as a customer recently ran into the issue of the max outbound batch size or single message size being 240 KB for the output binding.

While this would mitigate #8, I don't believe adding a max message size to host.json and putting it on the customer to set the value is the best approach. Instead, an effort could be made to migrate the Azure Functions Event Hubs extension to the latest Event Hubs SDK and then leverage Azure.Messaging.EventHubs.EventHubProducerClient to create and send the EventDataBatch, using TryAdd to find out when the batch max size has been reached and the batch should be sent. Using this SDK would give a dynamic measure of the max message size, as the new SDK discovers the maximum size allowed for a message by querying the service rather than using a hardcoded value that may be incorrect for some SKUs. This would also address another possible issue with the current implementation: the calculation only takes into consideration the body of the EventData and not the other user properties of the event.

Note: the Event Hubs .NET SDK currently used by the Event Hubs Functions extension also has EventDataBatch, but that version still hardcodes the max message/batch size to 1 MB, so it would be an issue if the customer's tier is Basic with a max size of 256 KB, hence the recommendation to use the new SDK instead.
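A minimal sketch of the batching pattern described above, using the newer Azure.Messaging.EventHubs producer client; the connection string, hub name, and payloads are placeholders, not part of the extension's actual code.

```csharp
using System;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

// Placeholder connection details.
await using var producer = new EventHubProducerClient("<connection-string>", "<event-hub-name>");

// Illustrative payloads; in the extension these would come from the output binding.
var payloads = new[] { new BinaryData("event-1"), new BinaryData("event-2") };

// The batch is created against the active link, so its size limit is the one
// reported by the service rather than a hardcoded constant.
EventDataBatch batch = await producer.CreateBatchAsync();

foreach (var payload in payloads)
{
    var eventData = new EventData(payload);

    if (!batch.TryAdd(eventData))
    {
        // The current batch is full: send it and start a new one.
        await producer.SendAsync(batch);
        batch.Dispose();
        batch = await producer.CreateBatchAsync();

        if (!batch.TryAdd(eventData))
        {
            // A single event larger than the service limit can never be sent.
            throw new InvalidOperationException("Event exceeds the maximum allowed size.");
        }
    }
}

if (batch.Count > 0)
{
    await producer.SendAsync(batch);
}

batch.Dispose();
```

The key point for the extension is that the size check happens against the limit the service actually reports, so Basic (256 KB) and Standard (1 MB) namespaces are both handled without any user configuration.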


zawor commented Aug 27, 2020

IMHO the TryAdd approach with probing is also merely a workaround for an insufficient API in the Event Hub client code... I spent less than 5 minutes on this so I assume there are some flaws, but why not extend (I am aware that this may be a feature request for another team) EventHubProducerClient.getEventHubProperties(options?: GetEventHubPropertiesOptions) to return the plan or the max event size in EventHubProperties? This could be called on init, or lazily on the first event send, and would simply give a clear answer to these kinds of concerns.

Edit:
On second thought, returning the size would be preferable because it would save translating the plan to a size (which would be hardcoded) and would be, to some extent, future-proof if the size limits change in the future.


nzthiago commented Aug 27, 2020

@jsquire FYI.

@zawor - thanks! I think that value is available by creating a new CreateBatchOptions class instance, or by checking the MaximumSizeInBytes value of EventDataBatch if you create the class instance with default values for the batch size, as it would give you "the maximum size allowed by the active transport". I think it'd be good to have that also available on the client class itself, so I added @jsquire here. Anyway, as I understand it, if you don't use the TryAdd approach on EventDataBatch, you'd have to do the calculation of "will this event with its content and all properties still fit in the max size allowed for this EH transport and tier" yourself instead of letting the Event Hubs SDK do it for you, so you might as well use TryAdd?

Both approaches would work better, though, than what is possible with the current version of the Event Hubs SDK used by Azure Functions.
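For reference, a small sketch of reading that limit from a default batch with the same SDK as in the earlier snippet; producer is assumed to be an existing EventHubProducerClient.

```csharp
// Create a batch with default options; the client negotiates the size limit with the
// service, so MaximumSizeInBytes reflects the active transport and tier.
using EventDataBatch probeBatch = await producer.CreateBatchAsync(new CreateBatchOptions());

long maxBatchSizeInBytes = probeBatch.MaximumSizeInBytes;
Console.WriteLine($"Maximum batch size reported by the service: {maxBatchSizeInBytes} bytes");
```

This is the same value TryAdd enforces internally; reading it directly is only needed if the caller wants to do its own size accounting.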


jsquire commented Aug 27, 2020

Hey, @nzthiago, thanks for looping me in.

The TL;DR version is that the binding should not assume that it can accurately measure the size on its own. I'd recommend using EventDataBatch and TryAdd for this scenario unless the Functions binding is willing to trade off safety for additional throughput. The difference is whether the client or the service rejects a batch that is too large, and whether communicating that rejection takes an exception path.

For those interested in details, there are a couple of things germane to this conversation that I'll mention:

  • To accurately determine the size of an event, it must be measured in the format used by the active protocol, and the overhead of the batch envelope that will contain it must be accounted for. Even were there a trustworthy sizeof equivalent in .NET to measure the model, it wouldn't be an accurate representation. Best case, you'd end up wasting space because you'd have to reserve overhead padding defensively. Worst case, you'd end up with a miscalculation that causes publishing to fail.

  • The size limit for communication is not a property of the Event Hub; it is a property of the communications channel for the active protocol at the entity level. This means that the service could choose to specify a different value for individual partitions of an Event Hub or for the gateway used for automatic routing and servicing of partition keys. Pragmatically, this isn't how the service operates today, but the client cannot make that assumption.

  • Communication channels are ephemeral to a certain extent: the client will potentially be opening, closing, and recreating links to the service to provide a stable and resilient experience for callers that doesn't waste resources. As the service controls the size, at any point where a new link is opened, the maximum size allowed for it may differ. Here again, this isn't likely, but it is possible, and so the client cannot make assumptions.

  • The reason that EventDataBatch exists is to provide a deterministic and accurate means to measure the size of a message sent to the service in the context of a single operation, minimizing the chance of failure. Because the batch is created in the context of a link to the service, the size limit is the most accurate available. It is still possible for that value to change and a batch be rejected, but it is as close to per-request validation as possible.

  • In many scenarios where publishers send batches more frequently and don't fear exceeding the size cap, it's entirely reasonable to bypass EventDataBatch and use the SendAsync overload that takes a set of events. In this case, the client will package them and publish, allowing the service to make the validation decision alone. This can help maximize throughput but does present as an exception when the batch is too large.
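To contrast the two paths from the last bullet, a short sketch of the throughput-oriented overload, where the service alone validates the size; it assumes the same placeholder producer and illustrative events as the earlier snippet.

```csharp
var events = new[]
{
    new EventData(new BinaryData("event-1")),
    new EventData(new BinaryData("event-2"))
};

try
{
    // No client-side measurement: the client packages the events and the service
    // accepts or rejects the whole set.
    await producer.SendAsync(events);
}
catch (EventHubsException ex) when (ex.Reason == EventHubsException.FailureReason.MessageSizeExceeded)
{
    // The rejection surfaces on the exception path; fall back to EventDataBatch/TryAdd
    // or to sending smaller sets of events.
}
```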


zawor commented Aug 27, 2020

Until today I went with an even more abstract way, as I simply used it as an Azure Function binding with IAsyncCollector and simply smacked data through AddAsync... But yeah, I did a rough estimate of the size to check whether it more or less fits in a single event, and if not (not that common a case) we did a detour through blob storage for data that was too big for an event.

The reason I got burned by it is that I had assumed I have 1 MB, according to the Event Hub docs, and was checking for roughly 1 MB when sending out.
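For context, a minimal sketch of the output-binding pattern described here, using the WebJobs Event Hubs attribute and IAsyncCollector; the function name, trigger, hub name, and connection setting are placeholders, and the rough size check and blob detour are omitted.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class ForwardToEventHub
{
    [FunctionName("ForwardToEventHub")]
    public static async Task Run(
        // An arbitrary queue trigger stands in for whatever produces the data.
        [QueueTrigger("incoming-items")] string item,
        [EventHub("event-hub-name", Connection = "EventHubConnection")] IAsyncCollector<string> outputEvents)
    {
        // The collector hides batching entirely; the extension decides how events are
        // grouped and sent, which is where its size assumptions end up mattering.
        await outputEvents.AddAsync(item);
    }
}
```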


zawor commented Aug 27, 2020

@jsquire whoa, this is a great explanation... would love to have this in the docs in a huge, bold font.

@nzthiago

@jsquire - great explanation and detail! Agree with @zawor on adding that to docs somewhere. Also: bonus points for using germane, haven't seen that word in a while hehe.


jsquire commented Aug 31, 2020

Thanks for the kind words and suggestion, folks. I'll see what I can do about getting the documentation updated. @zawor: Was there a particular page of the documentation that you felt would benefit from some of this information and, if so, would you be so kind as to point me at it?


zawor commented Sep 1, 2020

@jsquire TBH, not quite... my gut feeling tells me that it should land somewhere in the Concepts or maybe the How-to Guides sections under the Event Hub docs, but on the other hand, IMHO something like "High-level arcana of messaging" under, or accessible from, the output binding docs would do the job better (at least for folks who come from the Functions ballpark).


nzthiago commented Oct 12, 2020

@alrod @paulbatum thank you for this fix! Is there a separate issue to track updating the extension to the latest SDK?


jsquire commented Oct 12, 2020

> @alrod @paulbatum thank you for this fix! Is there a separate issue to track updating the extension to the latest SDK?

Stop calling me by other people's names and I'll help. 😛

@nzthiago

> @alrod @paulbatum thank you for this fix! Is there a separate issue to track updating the extension to the latest SDK?
>
> Stop calling me by other people's names and I'll help. 😛

:) - Shouldn't at least #15173 be an entry in this azure-functions-eventhubs-extension repo here, though?


jsquire commented Oct 12, 2020

'twas not my decision, I'm just reporting the news. You may want to ping Alex on 15173 and ask, if you're interested in the context.


jsquire commented Oct 28, 2020

> 'twas not my decision, I'm just reporting the news. You may want to ping Alex on 15173 and ask, if you're interested in the context.

Well, it would seem my alias link somehow failed and I'm instead making reference to a random name. Let's try again... @AlexGhiondea owns the work stream for porting the Functions bindings to the latest SDK, as well as those issues.

sidkri added a commit that referenced this issue Dec 8, 2020
* Add EventHub output logging. Fixes #60.
* Unifying logging of exceptions. Fixes #59.
* Fix nuget errors on pack (#73)
* Increasing max message size and batch size. Fixes #29
* User configurable initial offset support (#79)
* Instructions via Readme.md for setting up local environment to run integration tests

Co-authored-by: Alexey Rodionov <alrod@microsoft.com>
Co-authored-by: Pragna Gopa <pgopa@microsoft.com>
Co-authored-by: Sid Krishna <sidkri@microsoft.com>