A way for the client to figure out if the payload is too large #1518

Closed
andre4i opened this issue Dec 15, 2021 · 6 comments
@andre4i

andre4i commented Dec 15, 2021

Is your feature request related to a problem? Please describe.

I noticed that when the client sends a payload larger than the server's configured maxHttpBufferSize, the client gets disconnected and the reason supplied to the disconnect event handler is transport error.

Any resilience mechanism that tries to reconnect and resend the message will be oblivious to the underlying 'Max payload size exceeded' error and will enter a retry loop that is costly for both client and server. transport error is generic enough to cover a set of error scenarios that are (eventually) recoverable, whereas a payload larger than the configured limit is not.

Describe the solution you'd like

  • One possibility would be to extend the disconnect event handler to provide a full error object that can be inspected and reasoned about (for example, with a flag indicating whether a reconnect would be futile).
  • Another possibility would be to raise an error event on the client in this situation instead of disconnect, again with an object holding more information.

Describe alternatives you've considered

A workaround would be for the client to know the server's maxHttpBufferSize configuration, measure each message, and self-govern its payload size, as in the sketch below. However, this is again costly, as messages are already measured in the socket.io stack, where an identical set of validations takes place.
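
A minimal sketch of that workaround, assuming the limit is shared with the client out of band (the protocol does not advertise it); the constant value and helper name here are illustrative:

```js
// Assumed to match the server's maxHttpBufferSize configuration.
const ASSUMED_MAX_HTTP_BUFFER_SIZE = 1e6; // bytes

// Hypothetical helper: roughly measure the serialized payload before emitting.
function safeEmit(socket, event, payload) {
  const size = new TextEncoder().encode(JSON.stringify(payload)).length;
  if (size >= ASSUMED_MAX_HTTP_BUFFER_SIZE) {
    console.warn(`payload of ~${size} bytes would exceed the server limit, not sending`);
    return false;
  }
  socket.emit(event, payload);
  return true;
}
```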

Additional context

See this resilience behavior here. The server is configured with an impossible maxHttpBufferSize value, which makes the client enter an endless loop of reconnects:

(screenshot: client log showing the endless reconnect loop)

@andre4i andre4i added the enhancement New feature or request label Dec 15, 2021
@darrachequesne
Member

That sounds like a good improvement 👍

Currently, the HTTP request or the WebSocket connection gets closed if any message is bigger than the maxHttpBufferSize value.

But the client indeed does not get the reason behind the "transport close" error.
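
For context, a minimal sketch of the server-side option involved (the value is illustrative):

```js
const { Server } = require("socket.io");

// Any single message larger than maxHttpBufferSize causes the server to close
// the HTTP request (long-polling) or the WebSocket connection.
const io = new Server(3000, {
  maxHttpBufferSize: 1e6 // bytes
});
```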

@andre4i
Author

andre4i commented Jan 4, 2022

Hi @darrachequesne, thanks for the input. What do you think would be the proper way to improve this experience? Should this be a new reason for the disconnect event at https://socket.io/docs/v3/client-socket-instance/#disconnect?

darrachequesne added a commit to socketio/engine.io-client that referenced this issue Apr 11, 2022
The close event will now include additional details to help debugging
if anything has gone wrong.

Example when a payload is over the maxHttpBufferSize value in HTTP
long-polling mode:

```js
socket.on("close", (reason, details) => {
  console.log(reason); // "transport error"

  // in that case, details is an error object
  console.log(details.message); // "xhr post error"
  console.log(details.description); // 413 (the HTTP status of the response)

  // details.context refers to the XMLHttpRequest object
  console.log(details.context.status); // 413
  console.log(details.context.responseText); // ""
});
```

Note: the error object was already included before this commit and is
kept for backward compatibility.

Example when a payload is over the maxHttpBufferSize value with
WebSockets:

```js
socket.on("close", (reason, details) => {
  console.log(reason); // "transport close"

  // in that case, details is a plain object
  console.log(details.description); // "websocket connection closed"

  // details.context is a CloseEvent object
  console.log(details.context.code); // 1009 (which means "Message Too Big")
  console.log(details.context.reason); // ""
});
```

Example within a cluster without sticky sessions:

```js
socket.on("close", (reason, details) => {
  console.log(details.context.status); // 400
  console.log(details.context.responseText); // '{"code":1,"message":"Session ID unknown"}'
});
```

Note: we could also print some warnings in development for the "usual"
errors:

- CORS error
- HTTP 400 with multiple nodes
- HTTP 413 with maxHttpBufferSize

but that would require an additional step when going to production
(i.e. setting the NODE_ENV variable to "production"). This is open to
discussion!

Related:

- socketio/socket.io#3946
- socketio/socket.io#1979
- socketio/socket.io-client#1518
@darrachequesne
Member

The disconnect event will now provide additional details:

socket.on("disconnect", (reason, details) => {
  console.log(reason); // "transport error"

  // in that case, details is an error object
  console.log(details.message); "xhr post error"
  console.log(details.description); // 413 (the HTTP status of the response)

  // details.context refers to the XMLHttpRequest object
  console.log(details.context.status); // 413
  console.log(details.context.responseText); // ""
});

Implemented in b862924 and included in version 4.5.0.
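
With these details, an application-level retry policy can tell a recoverable transport error apart from a payload that will never fit. A minimal sketch, assuming automatic reconnection is disabled (reconnection: false) so the application decides when to reconnect; the 413 / 1009 checks mirror the examples above:

```js
socket.on("disconnect", (reason, details) => {
  // HTTP long-polling: the server responded with 413 Payload Too Large.
  const tooLargeOverPolling = details?.context?.status === 413;
  // WebSocket: close code 1009 means "Message Too Big".
  const tooLargeOverWebSocket = details?.context?.code === 1009;

  if (tooLargeOverPolling || tooLargeOverWebSocket) {
    // Retrying the same payload is futile; surface the error instead.
    console.error("payload exceeded the server's maxHttpBufferSize, not retrying");
    return;
  }

  // Other transport errors may be transient, so try to reconnect.
  socket.connect();
});
```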

@darrachequesne darrachequesne added this to the 4.5.0 milestone May 12, 2022
@NicholasCouri

I know the issue is closed, but @darrachequesne, is there a way to log or retrieve, from the client side, the payload size we tried to send?

@TechAkayy

It would be good to know the payload size that triggered the disconnect, so that we can dynamically calculate a chunkSize and relay the data across. Without the payload size, it's not possible to make any calculations and keep the comms stable & reliable. Thanks in advance.
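
Not something the library exposes today, but a minimal sketch of that chunking approach, assuming the server limit is known to the client out of band and that the receiving side reassembles the chunks (the names and the 90% headroom for packet framing are illustrative):

```js
const ASSUMED_MAX_HTTP_BUFFER_SIZE = 1e6; // must match the server config
const CHUNK_SIZE = Math.floor(ASSUMED_MAX_HTTP_BUFFER_SIZE * 0.9); // leave headroom for framing

// Hypothetical helper: split a large string (or Buffer) into chunks that stay
// under the assumed limit, tagging each one so the receiver can reassemble them.
function emitInChunks(socket, event, data) {
  const total = Math.ceil(data.length / CHUNK_SIZE);
  for (let i = 0; i < total; i++) {
    socket.emit(event, {
      index: i,
      total,
      chunk: data.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE),
    });
  }
}
```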

@darrachequesne
Copy link
Member

@TechAkayy that's a good idea. We could even include it in the library, which might prevent the socket from disconnecting during a big emit. What do you think?
