This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Getting ERROR WS Error <Capacity>: Reached the limit of the output buffer for the connection. errors #2039

Closed
mariopino opened this issue Nov 30, 2020 · 10 comments


@mariopino

I'm using an archive node on the Polkadot chain for the PolkaStats backend and I'm getting these messages:

Nov 30 17:01:44.863  INFO 💤 Idle (27 peers), best: #2696385 (0xb111…f5e1), finalized #2696382 (0x607d…f5ae), ⬇ 144.8kiB/s ⬆ 495.0kiB/s
Nov 30 17:01:48.345 ERROR WS Error <Capacity>: Reached the limit of the output buffer for the connection.
Nov 30 17:01:48.345 ERROR WS Error <Capacity>: Reached the limit of the output buffer for the connection.
Nov 30 17:01:48.511  INFO ✨ Imported #2696386 (0x5788…84ca)

Is there any way to increase the output buffer size? Or any way I can debug this?

@niklasad1
Member

niklasad1 commented Nov 30, 2020

No, unfortunately not: https://github.com/paritytech/substrate/blob/master/client/rpc-servers/src/lib.rs#L116-#L139

//cc @tomusdrw shall we make it configurable in substrate?

@tomusdrw
Contributor

@mariopino the output buffer is limited to prevent DoS vectors, and afaict it's configured to be 15 MB, which should be enough for all reasonable use cases. What kind of requests/subscriptions does your client perform?

Perhaps the node is producing output too fast for your client to read it?

@mariopino
Author

Thanks @tomusdrw and @niklasad1. I'm using Node.js and polkadot-js/api with many parallel queries like:

const allNominatorIdentities = await Promise.all(
  allNominatorAddresses.map(accountId => api.derive.accounts.info(accountId))
);

On Polkadot that amounts to fetching the identities of around 8k addresses.

@tomusdrw
Contributor

Okay, so most likely the server is producing responses faster than your client is able to consume them. You could send even more by opening multiple WS connections (each connection has its own limit) if that's expected, but I'd rather consider throttling the number of requests your client makes.

We can obviously easily make the output buffer configurable, but so far we were trying to avoid exposing too many options of the RPC server. Usually it's just better to put a reverse proxy in front of your node and configure it in a way that suits you best (that can include load balancing, buffering, etc).
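The throttling idea can be sketched like this: run the queries in fixed-size batches instead of one giant Promise.all, so only a bounded number of responses are in flight per connection at any time. This is a minimal sketch; `mapInBatches` and `fetchInfo` are hypothetical names, and `fetchInfo` stands in for `accountId => api.derive.accounts.info(accountId)`:

```javascript
// Hypothetical helper: map `fn` over `items` in batches of `batchSize`,
// awaiting each batch before starting the next. With batchSize = 100 the
// 8k-address query never has more than 100 responses pending, giving the
// server's per-connection output buffer time to drain between batches.
async function mapInBatches(items, batchSize, fn) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...await Promise.all(batch.map(fn)));
  }
  return results;
}

// Example with a stand-in for the real API call:
const fetchInfo = async (accountId) => ({ accountId, identity: null });
mapInBatches(['addr1', 'addr2', 'addr3'], 2, fetchInfo)
  .then(infos => console.log(infos.length)); // → 3
```

The batch size trades throughput against buffer pressure; a real client would tune it against the node's observed behavior.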

@mariopino
Author

Thanks for the advice! I think I'll try both: opening multiple WS connections (currently I'm using the same one for all backend queries) and using a reverse proxy.
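The multiple-connections approach can be sketched as a simple round-robin pool, since each connection gets its own output buffer. This is a hypothetical sketch (the `ClientPool` name and the stand-in client objects are not part of polkadot-js; a real version would hold one `ApiPromise`/`WsProvider` pair per entry):

```javascript
// Hypothetical round-robin pool: each pick() returns the next connection in
// turn, spreading queries so no single connection's output buffer fills up.
class ClientPool {
  constructor(clients) {
    this.clients = clients;
    this.next = 0;
  }
  pick() {
    const client = this.clients[this.next];
    this.next = (this.next + 1) % this.clients.length;
    return client;
  }
}

// Example with stand-in clients in place of real WS connections:
const pool = new ClientPool([{ id: 'ws-0' }, { id: 'ws-1' }, { id: 'ws-2' }]);
console.log(pool.pick().id, pool.pick().id, pool.pick().id, pool.pick().id);
// → ws-0 ws-1 ws-2 ws-0
```

Note this spreads load but doesn't reduce it; if the client still can't consume responses fast enough overall, throttling is the more direct fix.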

@nazar-pc
Contributor

nazar-pc commented Nov 4, 2021

Is it possible that the buffer is not cleaned when clients disconnect?
I was pushing hundreds of transactions per second to the node, then disconnected all RPC clients (there were 3), but still getting WS Error <Capacity>: Reached the limit of the output buffer for the connection. message in the log.

@niklasad1
Member

niklasad1 commented Nov 4, 2021

> Is it possible that the buffer is not cleaned when clients disconnect?
> I was pushing hundreds of transactions per second to the node, then disconnected all RPC clients (there were 3), but still getting WS Error <Capacity>: Reached the limit of the output buffer for the connection. message in the log.

I think jsonrpc/ws-rs creates a buffer for each connection and applies the limit to that, but I'm not sure.
It's possible, but it could also be that the methods that were called/executed produced a "too big" response, such as reading state or something.

I think the easiest way to determine whether your hunch checks out is to connect one client at a time, perform one call, and drop it, in a loop:

for _ in 0..BIG_NUMBER {
    let client = new_client();
    client.request(my_transaction);
    drop(client);
}

Meanwhile, there is a new CLI option --ws-max-out-buffer-capacity if you run your own node.
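For node operators, a hypothetical invocation might look like the following (the flag name comes from the comment above; the value's unit and default aren't stated in this thread, so check the node's --help output before relying on a particular number):

```shell
# Hypothetical sketch: raise the per-connection WS output buffer limit.
# Verify the unit (bytes vs. MB) and default with `polkadot --help`.
polkadot --ws-max-out-buffer-capacity 256
```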

@jasl
Contributor

jasl commented Jan 10, 2022

I'm trying to query storage (8 parallel queries on 1 WS connection, total result ~20 MB) in a while(true) loop. It raises WS Error <Capacity>: Reached the limit of the output buffer for the connection. after 1 iteration, and I have to disconnect and re-connect.

I tried to set --ws-max-out-buffer-capacity 1024 --rpc-max-payload 1000 but it doesn't work.

And it's so weird that it worked previously (the same binary); maybe there is another vector triggering this error?

UPDATE: It's really weird that with the same Polkadot.js script, running on macOS triggers the error, while running on Linux there's no problem.

@jasl
Contributor

jasl commented Jan 16, 2022

Just sent a PR paritytech/cumulus#907 for Cumulus projects

@niklasad1
Member

Closing this because it's related to the old jsonrpc WS server, which has been replaced with jsonrpsee, where this limit is configurable.
