Geth 1.8.15 - Memory Leak? #17646
Comments
My testing indicates that 1.8.13 is stable and 1.8.15 has some sort of a problem.
@pschlump thank you for the tip. A few questions for you:
Thanks.
I have a private testnet. The two machines running Geth have 96 GB of memory, quad Xeons, and 2x 2 TB hard drives. They are isolated from mainnet by a hardware firewall; they are purely test systems. My downgrade process for the test systems: I used Docker to bring up 1.8.13 nodes, one on each system, and let them sync. Then I shut down the 1.8.15-unstable nodes. Then I brought up two new nodes with 1.8.13 and shut down the Docker containers. I can confirm that the 1.8.13 version is stable and not leaking. When I bring up a 1.8.15 node in Docker, it grows until I kill it.
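For anyone who wants to reproduce this kind of side-by-side version comparison, here is a minimal sketch of pinning a specific Geth release in Docker. The image tag format is from the official ethereum/client-go repository; the data directory, network id, and container name are illustrative assumptions, not taken from this thread:

```bash
# Run a pinned 1.8.13 node from the official image (tag assumed available on Docker Hub).
# The host data directory and network id below are hypothetical placeholders.
docker run -d --name geth-1813 \
  -v /srv/geth-data:/root/.ethereum \
  -p 30303:30303 \
  ethereum/client-go:v1.8.13 \
  --networkid 12345 --syncmode full

# Watch container memory over time to compare behaviour between versions.
docker stats geth-1813
```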
I have not tried 1.8.14 - I will try that today in a Docker container.
I have run our distributed key generation application (Keep/thesis*) on 1.8.13: geth grows by 1.1 MB of memory and then goes back down within a few minutes (good behavior). On 1.8.15 it grew by 2.3 GB! I am setting up a 1.8.14 version now.
My tests indicate that 1.8.14 is OK; the problem is with 1.8.15.
On a test-network? Behind a firewall? Why?
On Wed, Sep 12, 2018 at 5:03 PM, aerth wrote:
> --rpc --rpcaddr=0.0.0.0 --rpcapi='db,eth,net,web3,personal,admin' should be illegal or something
--
Philip Schlump
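Context for why that flag combination is considered dangerous: exposing the personal and admin APIs on 0.0.0.0 lets anyone who can reach the port unlock accounts or reconfigure the node. A hedged sketch of a more conservative configuration (flag names are from the 1.8.x CLI; the values shown are illustrative, not the reporter's actual setup):

```bash
# Bind HTTP-RPC to loopback only and expose only the read-oriented modules.
# Anything that needs personal/admin can go over the local IPC endpoint instead.
geth --rpc \
  --rpcaddr 127.0.0.1 \
  --rpcport 8545 \
  --rpcapi 'eth,net,web3'
```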
Thank you @pschlump for the pointers. I will give it a try.
On those logfiles, the first one had 10 simultaneous […]. You could try using:
go-ethereum/cmd/utils/flags.go, line 181 in 0e32989
I have no idea why it would differ between versions, though. But I guarantee that […]
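The specific flag referenced above isn't recoverable from this scrape, but for memory-growth investigations like this one, a sketch of capturing a heap profile with Geth's built-in pprof server may be useful. The flag names are assumed from the 1.8.x CLI (they were later renamed), and the listening address is illustrative:

```bash
# Start geth with the debug/pprof HTTP server enabled, bound to loopback only.
geth --pprof --pprofaddr 127.0.0.1 --pprofport 6060

# From another shell, grab a heap profile and list the top allocators.
go tool pprof -top http://127.0.0.1:6060/debug/pprof/heap
```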
Oh, and if it wasn't you calling […]
I guess your firewall is not properly configured, so this ticket demonstrates a pretty good reason :)
I think I was the source of the unlocks on my system. I have looked through my firewall logs and I see no evidence that any unexpected outside activity took place. I am now looking into the possibility that somebody unwanted has penetrated our security and has malicious code running inside our firewall. I don't see any unexpected pending transactions, and I am monitoring for pending transactions once a second. I take your comment very seriously.
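A hedged sketch of that once-a-second pending-transaction check, done over the local IPC endpoint so no HTTP API has to be exposed. The IPC path is a placeholder and the use of txpool.status is an assumption, not quoted from the thread:

```bash
# Poll the local node's transaction pool once a second via IPC.
# Adjust the IPC path to match your --datadir.
while true; do
  geth attach --exec 'txpool.status' /srv/geth-data/geth.ipc
  sleep 1
done
```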
I managed to resolve this problem by disabling the 'db' RPC API. Not sure if it's the same root cause as yours, but the behaviour seems similar to mine.
I'm running 1.8.17-stable, with the same environment and the same problem. I don't have […]
Edit: I downgraded geth to 1.8.14 as @pschlump mentioned. I see no problem after syncing. 👍
This is already solved, I'm closing.
System information
Geth version:
OS & Version:
Expected behaviour
Geth runs smoothly with normal and stable RAM usage.
Actual behaviour
RAM usage started normally at around 30%. It slowly climbed until Geth crashed at around 90% RAM usage.
Steps to reproduce the behaviour
Command:
FYI, I'm running a two-node private blockchain. Both machines have the same specs as above. Each node has a 50 GB EBS volume and 4 GB of RAM; they are 't3.medium' EC2 instances on AWS.
I didn't do anything to the node during the recording below. No extra load was sent to the node (e.g. HTTP RPC calls, `geth attach`, etc.), just mining, syncing with the second node, and `htop` running in another terminal. I did try running the same command above in background mode and the same issue happened. I noticed that Geth stopped after ~10 minutes; my SSH session got stuck when RAM usage was at its peak.
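Since the report relies on watching htop by eye, a small sketch of logging Geth's resident memory over time may help correlate the growth with log events. This is purely illustrative and not something the reporter ran:

```bash
# Append a timestamped RSS sample (in KB) for the geth process every 10 seconds.
while true; do
  echo "$(date -Is) $(ps -o rss= -C geth)" >> geth-rss.log
  sleep 10
done
```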
Backtrace
Is this issue related to #16728 and #16859?
Can someone suggest the most stable version for me?