Weird error message on heavy usage of RPC #10342
Comments
I'm seeing the same problem when putting parity under rpc load:
The node also does not manage to keep a decent number of peers. It seems like it's losing/dropping peers when the error message appears. I think the problem has been there since the latest RPC-related security fixes. I'm running this in docker with:
I've also tried to add
but it shows the same problem.
Since 2.3.5 was really unusable for me, I've updated to v2.4.0-beta. After some time under RPC load I again see
and the peer count going down.
Any progress on this? I'm still seeing the same issue:
Version:
I've also started seeing this.
Still seeing the same issue.
Parity version:
I'm pretty sure it has to do with the
At least it causes the issue when calling through my code (which is C++ and uses Curl). I tried reproducing with a straight call to curl from the command line (using the same tx hash):
But that doesn't manifest the problem. Hope this helps. One more note: I'm about 99.9% sure this is one of the Fall 2016 DDoS attack transactions. My guess is that this is deeply related to that. I note that any transaction sent to
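For anyone trying to reproduce this from code, a minimal C++/libcurl sketch of a single JSON-RPC call follows. The endpoint (127.0.0.1:8545), the trace_transaction method, and the placeholder transaction hash are assumptions for illustration only; the exact method and hash from the comment above are not shown here, and this is not the reporter's actual code.

```cpp
#include <curl/curl.h>
#include <iostream>
#include <string>

// Append the response body to a std::string supplied via CURLOPT_WRITEDATA.
static size_t write_cb(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Placeholder transaction hash -- substitute a real one.
    const std::string body =
        R"({"jsonrpc":"2.0","method":"trace_transaction","params":["0x<tx-hash>"],"id":1})";

    std::string response;
    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8545");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    if (curl_easy_perform(curl) != CURLE_OK)
        std::cerr << "RPC request failed\n";
    else
        std::cout << response << "\n";

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

Build against libcurl (e.g. g++ rpc_call.cpp -lcurl) and run it in a tight loop or from several processes to approximate the load pattern described in this thread.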
I'm running parity in Docker and noticed that once I removed the parameters "--ws-interface=0.0.0.0", "--ws-origins=all", I no longer see the problem.
Hey guys, I am seeing the same error. Any progress?
This didn't work for me.
@tjayrush Is the C++ code putting heavy load on the node? Or is a single invocation from your app enough to break the IO pipe?
I will try today and I'll let you know. @dvdplm Are you sure this will fix the issue related to high RPC workloads?
No. I am sure it improves the resource usage, but just how much of an improvement it turns out to be is hard to tell.
Hey @dvdplm I think I understood my problem. I will briefly explain it here; maybe it is helpful for others. I have a client firing hundreds of tx/s toward my 4-node Parity network running Aura consensus. The client does not wait for a response before sending the next request, and after x minutes it closes the connections. On the Parity side, the server keeps the transactions in a queue and answers the client as soon as possible. However, under high load the response time increases, so it may happen that the connection is closed, client-side, before Parity can process all the queued transactions. This causes the BrokenPipe error. Now my question is: will the transactions still in the queue be processed by Parity?
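To make that pattern concrete, here is a minimal sketch (not the actual client) of a libcurl loop that fires transactions without caring about the responses and tears each connection down after a short timeout; server-side, the node's attempt to write a late response to the already-closed socket is what surfaces as the BrokenPipe error. The endpoint, payload, timeout, and request count are illustrative assumptions.

```cpp
#include <curl/curl.h>
#include <string>

// Discard the response body -- this client never looks at what the node sends back.
static size_t discard_cb(char*, size_t size, size_t nmemb, void*) {
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);

    // Hypothetical payload; a real client would sign and encode a fresh transaction per call.
    const std::string body =
        R"({"jsonrpc":"2.0","method":"eth_sendRawTransaction","params":["0x<signed-tx>"],"id":1})";

    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    for (int i = 0; i < 1000; ++i) {
        CURL* curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8545");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard_cb);
        // Give up after 50 ms: if the node is slower than that, libcurl closes the
        // socket and the node's eventual write fails with a broken pipe.
        curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 50L);
        curl_easy_perform(curl); // return code deliberately ignored: fire and forget
        curl_easy_cleanup(curl);
    }

    curl_slist_free_all(headers);
    curl_global_cleanup();
    return 0;
}
```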
@deanstef parity’s http impl is no worse or better than other http servers, and if the client disconnects prematurely without giving notice,
I have the same problem when load testing Parity v2.6.8-beta.
Still having the same issue on version OpenEthereum/v3.0.0-stable-fdf5f67-20200511/x86_64-unknown-linux-gnu/rustc1.43.1. Had it very often on 2.5 and 2.7. Decided to upgrade to OpenEthereum 3.0 but still the same issue. It makes the node unusable for some time before it starts syncing again.
2020-05-25 12:51:34 UTC 17/25 peers 6 MiB chain 1 GiB db 286 KiB queue 12 MiB sync RPC: 0 conn, 4 req/s, 72 µs
Any news on fixing it?
Just hit the same issue.
Before filing a new issue, please provide the following information.
Operating system: Mac
Installation: One-line installer
Fully synchronized: yes
Network: Ethereum mainnet
Restarted: yes (and rebooted machine)
I run parity with
--tracing on
and have been for many months/years with nearly zero problems. This morning, I was running three pretty intensive processes against the Parity RPC (basically using CURL to retrieve transactions and blocks in large quantities) and I got the following error messages. When I tried to quit (after shutting down my processes), it took a very long time to close Parity, but it eventually quit. I then rebooted the machine (without first trying to restart Parity), restarted Parity, and it now seems to be working fine. Thought I'd report the issue because I was advised to do so in the Gitter channel. Cheers.
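For context, the kind of bulk retrieval described above can be approximated with a short sketch like the one below, written in C++/libcurl rather than command-line curl. The endpoint, the block range, and the use of eth_getBlockByNumber are assumptions for illustration, not the exact scripts that triggered the errors.

```cpp
#include <curl/curl.h>
#include <iostream>
#include <sstream>
#include <string>

// Collect each response into a std::string.
static size_t write_cb(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    // Arbitrary block range, purely for illustration.
    for (long block = 7000000; block < 7000100; ++block) {
        std::ostringstream body;
        body << R"({"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x)"
             << std::hex << block << R"(",true],"id":1})";
        const std::string payload = body.str();
        std::string response;

        curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8545");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

        if (curl_easy_perform(curl) != CURLE_OK)
            std::cerr << "request for block " << std::dec << block << " failed\n";
        // A real harvester would parse `response` and also fetch traces/receipts here.
    }

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

Running two or three such loops in parallel against one node is roughly the load profile described in the report above.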