Geth 1.7.0: fatal error: runtime: out of memory #15157
I've been having this issue for a few weeks now. I've tried setting the --cache value and using --lightkdf, and yet when it gets to block ~3.7M it crashes. Using Supervisord does help keep the thing running, but it's still constantly crashing. The only way I've gotten geth to run reliably has been in light mode, which of course limits my ability to do anything. This really needs to be fixed. Using DigitalOcean Ubuntu 16 x64, 2GB RAM, 40GB disk (yes, not ideal, but we need this for testing).
Exact same issue. Can't even get it to fully sync. Using DO Ubuntu 16.04, 4GB RAM. It slowly eats up memory then crashes, same error message.
@Inigovd Try using Supervisor. Mine eventually did sync, but only because that program kept kicking it along. Also, try creating a swap file. That also helped me a little.
I already have swap enabled. I'll try with Supervisor... Does yours not crash anymore after syncing? I want to build a commercial application using geth for automated transactions, but if it keeps crashing, I'll look into other options...
I just checked mine; it's still running. I hear Parity might be better, but I've yet to try it myself (plan B if this thing causes more issues). Depending on your app's requirements and expected traffic, check out Infura's API.
Coinbase is also seeing this issue; it's causing a lot of geth instance churn right now.
OS: Windows 7 Ultimate. I experienced the same behavior when first syncing a new wallet. I filed it as an issue in the Mist repository (see #3214). That issue contains attached node logs with a panic stack that includes references to the ethereum packages that called into the golang runtime, so it might provide further insight into the problem's cause.
Also, after "kicking it" three times, the sync completed. However, running geth constantly for 3-4 days after the sync resulted in the same memory-depletion issue. If you use the computer for tasks other than running geth, it affects their performance too, especially if the OS utilizes virtual memory pages, which most do. I've disabled support for virtual memory on my Windows machine, so it doesn't suffer the long slow death from page thrashing that others have reported as block processing slows to a snail's pace.
Finally, many issues similar to this one appear in the Mist repository, which leads me to believe it's a pervasive one that's under-reported due to differences in how individuals find similar posts that reflect the behavior they are experiencing.
I had to 'kick it' a few times as well: just keep rebooting until it's fully synced. After that, I didn't have any issues; memory usage has been stable for 7+ days. Note that I am using swap files as described here: https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-16-04. Doing this made sure the instance didn't crash. On Linux, I have the following cronjob running to make sure geth keeps running. flock makes sure only one instance runs; make sure that geth is started by this cronjob. It should work well to keep geth running so it fully syncs, and to keep it running after as well.
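For anyone following along, the swap-file setup from that DigitalOcean tutorial plus a flock-guarded cron entry might look roughly like this (a sketch, not the commenter's exact config; sizes, paths, and the --cache value are assumptions to adjust for your machine):

```shell
# One-time swap setup (run as root). Sizes and paths are examples.
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots

# Cron entry (add via `crontab -e`): flock holds a lock so at most one
# geth runs, and cron retries every minute, restarting geth after a crash.
# * * * * * /usr/bin/flock -n /tmp/geth.lock /usr/local/bin/geth --cache 512 >> /var/log/geth.log 2>&1
```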
Or even better, use systemd to keep it running while logging everything. Adjust the following with your username and paths and save it as /etc/systemd/system/geth.service:
And run:
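The unit file itself didn't survive in this thread; a minimal sketch of what such a unit might look like (the User, binary path, and flags are assumptions):

```ini
[Unit]
Description=Go Ethereum client
After=network-online.target
Wants=network-online.target

[Service]
User=geth
ExecStart=/usr/local/bin/geth --cache 512 --datadir /home/geth/.ethereum
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Then reload and start it with `sudo systemctl daemon-reload && sudo systemctl enable --now geth`, and follow the logs with `journalctl -u geth -f`.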
Is this solved in current geth? I think I'm experiencing this in Quorum (i.e. geth 1.7.2).
I don't believe so. Here is a snippet of the output after I run
Windows 10, 8GB RAM
Seeing this as well in the latest version.
Same for me!
Make sure you have a swap file allocated. Geth seems to be memory-hungry and eats all the memory available on your machine.
Enabling swap on an 8GB RAM machine seems to resolve this problem for me (+8GB swap).
@Duiesel I still had some crashes with 16GB, no swap, and nothing else running.
@Duiesel Try creating a 16 GB swap file as I explained above, and don't use the --cache option.
@naure Create a swap file and try again.
Still seeing it here on Ubuntu 14.04.3, running in a VM. I'm a developer and willing to try to help out. Can anyone suggest how best I could help debug/fix this?
@poleguy How much memory do you have available there? Have you created a swap file?
@stevenroose I am also facing this issue in Quorum. How did you resolve the issue?
Here's a more detailed explanation on setting up the swap file.
@mariam-crissi We solved it by creating a big swap file.
I am running in a VM, and it did have too little memory. I just tried with more memory, and it doesn't crash immediately now. It would be nice if geth would check available memory and issue a warning below some level.
It does seem memory usage is excessive, and everyone would benefit if we could identify and fix leaks and inefficiencies.
I imagine many new users like me are giving up due to the slow, seemingly buggy behavior. I'm not impressed. If you have to throw top-of-the-line hardware at it, it seems Ethereum needs to rethink some things; that shouldn't be necessary for a network aspiring to be a distributed computing platform. Growing pains, I hope.
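A pre-flight warning like the one suggested here could even live in a wrapper script around geth; a rough Linux-only sketch (the 4 GiB threshold, the /proc/meminfo parsing, and the warning text are my own assumptions, not anything geth does):

```shell
#!/bin/sh
# Warn before launching geth if available memory looks too low.
# warn_if_low AVAILABLE_KIB THRESHOLD_KIB -> prints a warning or "ok".
warn_if_low() {
    if [ "$1" -lt "$2" ]; then
        echo "warning: only ${1} KiB available, geth may be OOM-killed"
    else
        echo "ok"
    fi
}

threshold_kib=$((4 * 1024 * 1024))  # 4 GiB expressed in KiB
# MemAvailable is the kernel's estimate of memory usable without swapping.
avail_kib=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
warn_if_low "${avail_kib:-0}" "$threshold_kib"
# exec geth "$@"   # hand off to geth after the check
```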
I'm receiving the same error. I've been trying to solo mine for what seems like 4 days now, and I end up giving up and going to nanopool. I really just want to solo mine. I finally figured out that Geth and Ethminer need to run at the same time. However, it seems I cannot sync. I'm running 20GB RAM on my machine and an AMD Radeon 570. When I run it, it gets to the point where block sync starts, and I watched videos that said to wait until it said "Start Mining" before launching ethminer.exe -G. However, after a few minutes it just spams a ton of errors; the first few lines are "out of memory".
So it runs for a while at about 3GB, which is cool and fine... then all of a sudden it just climbs really fast. Wondering how to fix this.
My apologies: I noticed above that some are using a much newer version of geth than I was. I really wish this was easier to research/follow for setting up solo mining.
Creating a giant swap file is the key. Thanks!
Got it all figured out; I was using a horribly old version. Updated to the newest release; it took 2 days but it fully synced and is now running.
The same for me. Tried with geth 1.8.2 on Win 10 64-bit, 64GB RAM, using a 128 VM paging file.
C:\Program Files (x86)\Geth>geth --fast --cache=1024
INFO [03-19|21:48:46] Maximum peer count ETH=25 LES=0 total=25
@Lyghtning which versions did you use (old, new)? Thanks.
I have hit the same problem with geth 1.8.1 and 1.8.2 on Windows 10 with 8GB RAM. I notice the actions above are all designed to increase available RAM, but it looks like a memory leak was introduced by a code change in an earlier release, because Windows Task Manager showed 90% RAM used when ~6GB was taken by the running processes (that is, 75%). If so, increasing memory will not solve the problem, just delay its occurrence.
A memory leak occurs when code requests additional memory and then doesn't free it. This can happen when an application is written for one platform and then ported to another, because some platforms do automatic garbage collection and so don't need the application code to explicitly free memory, but I'm guessing Windows doesn't do that. If so, the developers of geth would need to look at the changes introduced in the first version where this problem was noticed, and see whether dynamically allocated memory needs to be explicitly freed. I'm new to all this, so I don't know how to contact them; if someone here does, please pass these thoughts on.
Also, for those who are new and trying to mine but not succeeding: I found that when I wiped the files and started again with geth 1.8.1, it synchronised in about 3 days, started mining, and confirmed mining one block before the memory error caused it to stop. When I restarted geth, it hit the memory error every time, which may be because the "fast" sync doesn't happen with an existing installation. I may not investigate this much further, as I am only looking into it because I would like to create my own cryptocurrency based on the Ethereum package; but for those of you who want to mine Ether and are tripping over this problem, you may want to consider a different approach.
I don't know if it will work, but if you are getting the memory error, it might be worth renaming your Ethereum directory to Ethereum-old, then re-installing everything and restarting geth and ethminer. geth (from 1.8 on) will rebuild from scratch using the fast method, so it will take 2-3 days, and then mining will start. It is likely to crash a couple of days later with the memory problem, but may have done some mining by then. If so, you might be best to do the same again: rename, re-install, and restart. I expect you will be asked to confirm that you want to replace existing files. If so, respond yes, because what you are doing is replacing existing files with the blank version, so geth will use the fast approach. If it all goes horribly wrong, or you're not happy with the new setup, you can reinstate the old one by renaming Ethereum-old to Ethereum, replacing what's there. You can then restart geth, which will pick up from where the last update left off. Hope this helps some of you out there :)
I forgot to mention that I searched for this problem and this thread came up on top; if there is a more current thread elsewhere, please let me know.
I got this problem too. Geth 1.8.2-stable, 8GB memory.
Hi there,
@nastasache I think you're right about your problem, as 64GB RAM should last a while, and there are both 32-bit and 64-bit downloads available. However, I'm running 64-bit, as I allocated 5GB to the cache and it used it okay (and ran out sooner). I've stopped trying to mine now, as a new limit has kicked in: my graphics card has 2GB and the latest processing needs more than that, so it can't mine anyway.
@TUTUBIG I'm not aware of a solution; adding more RAM avoids the problem for a time.
Amazon ECS with 4-core, 8GB RAM:
fatal error: runtime: out of memory
runtime stack:
@zzd1990421 Give the thing an 8GiB swap file.
@stevenroose You can refer to #16728.
Like most people on this thread, I decided to upgrade memory. I went from 8GB to 16GB, and when I run geth I use -6144 as the storage limit, and it works. With this 6GB limit, Windows Task Manager shows it actually using about 9GB, but it grows very slowly after that (currently 9.9GB on my system, which is doing an update after 2 weeks offline), and it's still working! So a memory upgrade is likely to avoid the problem for you. The data suggest there is a memory leak in a very small area, and previous posts indicate it was introduced around the 1.6 release (I think); it would be good if the authors of changes from that time could look at the code to see what might be causing this.
I'm closing this issue as it relates to an old version of geth and it seems like 1.8.10 has addressed this (and many other) issues. |
I updated to 1.8.10 and the out-of-memory error is gone. However, I now get this: "fatal error: runtime: failed to commit pages", which seems to be just a different message for a similar error. Geth keeps crashing on my Windows Server 2016 (hosted on Azure) while running in fast-sync mode. But looking at the replies above, Linux has similar problems.
I am facing the same error.
Hi, I have tried a couple of times and realised that it always fails around the same spot:
Same error on Ubuntu 16 with the command: geth --datadir "/root/eth" --syncmode "full" --rpc --rpcaddr "0.0.0.0" --rpcapi eth,web3,personal,db,net --ws --wsaddr "0.0.0.0" console. Still out of memory.
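As a lower-footprint variant of a command line like that one, it may help to drop full sync, cap the cache, and trim the exposed RPC modules. A sketch, not a guaranteed fix; the specific values here are guesses to tune for your box, and binding RPC to 127.0.0.1 instead of 0.0.0.0 also keeps the API off the public internet:

```shell
# Fast sync, smaller cache, fewer RPC modules than the failing command.
geth --datadir /root/eth \
     --syncmode fast \
     --cache 512 \
     --maxpeers 25 \
     --rpc --rpcaddr 127.0.0.1 \
     --rpcapi eth,net,web3
```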
Most memory-related problems seem to occur when this is in combination:
Can you try starting geth without the db RPC API? My memory issues were solved after doing that. I isolated it after setting the CORS domain to localhost to make sure it was not about brute-force attacks.
CORS domain settings won't protect you from brute-force attacks from the internet; they only prevent pages you visit with your own browser from reading data from the API (they can still write/POST, though).
You're right, that was an oversight; the thing is that disabling the db RPC solved the issue. I'm not sure why an attacker would use that particular RPC API, so I suspect the out-of-memory problem is not related to an attack. I still have to test by closing the port on the firewall.
Removing db from the RPC API didn't resolve the problem for us.
I think it's about 32-bit vs 64-bit. I found the same geth.exe version under Program Files and it works; I had been using the one from Program Files (x86).
It could be solved by creating a swap file, but is there some way to limit the memory geth uses? Changing --cache seems to change nothing.
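One way to hard-cap geth's memory from the outside, independent of --cache, is a cgroup limit via systemd; a sketch (the 6G figure is an assumption, and note geth is simply killed and restarted when it hits the cap rather than staying under it gracefully):

```ini
# Additions to the [Service] section of a geth systemd unit (systemd 231+):
[Service]
MemoryMax=6G        # cgroup hard limit; the kernel OOM-kills geth at 6 GiB
Restart=on-failure  # systemd restarts it after the kill
RestartSec=5
```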
System information
Geth version:
OS & Version: Ubuntu 14.04.5 LTS
Expected behaviour
Geth 1.6.6 never used as much memory
Actual behaviour
Geth 1.7.0 eats away all the memory and dies
Steps to reproduce the behaviour
This has been happening when doing a fresh sync on 1.7.0 in a box with 8GB RAM
Backtrace
The full backtrace is more than 2,000 lines; showing the first few:
The black vertical line shows the moment I shut down Geth 1.6.6 and started a fresh sync for 1.7.0.
This is CPU:
This is memory:
The graphs don't have full resolution, but the dip in memory consumption is geth restarting after the box becomes unusable. This never happened on the same box with prior versions of geth.