Getting `ERROR WS Error <Capacity>: Reached the limit of the output buffer for the connection.` errors #2039
No, unfortunately not: https://github.com/paritytech/substrate/blob/master/client/rpc-servers/src/lib.rs#L116-L139 //cc @tomusdrw shall we make it configurable in Substrate?
@mariopino the output buffer is limited to prevent DoS vectors, and afaict it's configured to be 15 MB, which should be enough for all reasonable use cases. What kind of requests/subscriptions does your client perform? Perhaps the node is producing output too fast for your client to read it?
Thanks @tomusdrw and @niklasad1. I'm using Node.js and polkadot-js/api with many parallel queries like:
In Polkadot that amounts to fetching the identity of around 8k addresses.
Okay, so most likely the server is producing responses faster than your client is able to consume them. You could send even more by opening multiple WS connections (each connection has its own limit) if that's expected, but I'd rather consider throttling the number of requests made by your client. We could obviously make the output buffer configurable easily, but so far we've been trying to avoid exposing too many options of the RPC server. Usually it's just better to put a reverse proxy in front of your node and configure it in a way that suits you best (that can include load balancing, buffering, etc.).
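The throttling suggested above can be sketched as a small batching helper. This is a hypothetical example, not code from polkadot-js: `mapInBatches` and `fetchOne` are made-up names, where `fetchOne` stands in for a single RPC call such as an identity lookup.

```javascript
// Hypothetical sketch: cap how many RPC calls are in flight at once by
// running them in fixed-size batches instead of one giant Promise.all.
async function mapInBatches(items, batchSize, fetchOne) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Only `batchSize` requests run concurrently, so the server fills the
    // per-connection output buffer more slowly than with 8k parallel calls.
    results.push(...(await Promise.all(batch.map(fetchOne))));
  }
  return results;
}
```

With ~8k addresses, calling `mapInBatches(addresses, 100, addr => queryIdentity(addr))` keeps at most 100 responses queued on the connection at a time, trading some latency for a bounded buffer.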
Thanks for the advice! I think I'll try both: opening multiple WS connections (I'm currently using the same one for all backend queries) and using a reverse proxy.
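For the reverse-proxy route, a minimal nginx sketch might look like the following. This is an assumption-laden example, not configuration from the thread: it assumes the node's WS endpoint listens on `127.0.0.1:9944`; adjust host, port, and timeouts to your setup.

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:9944;
        # Headers required for the WebSocket upgrade handshake.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Keep long-lived subscriptions open instead of timing out at 60s.
        proxy_read_timeout 1h;
    }
}
```

The proxy can then absorb slow clients, terminate TLS, or load-balance across several nodes without exposing more knobs on the RPC server itself.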
Is it possible that the buffer is not cleaned when clients disconnect? |
I think the easiest way to determine whether your hunch checks out is to connect one client at a time, perform one call, and drop it, in a loop.
Meanwhile there is a new CLI option |
I'm trying to query storage (8 parallel queries over 1 WS connection, total result ~20 MB) in a I tried to set and it's so weird that it worked previously (the same binary); maybe there is another vector triggering this error? UPDATE: It's really weird that the same Polkadot.js script triggered the error when running on macOS, while on Linux there's no problem.
Just sent a PR paritytech/cumulus#907 for Cumulus projects |
Closing this because it's related to the old |
I'm using an archive node on the Polkadot chain for the PolkaStats backend and I'm getting these messages:
Any way to increase the output buffer size? Or any way I can debug this?