
Suggestions for saving disk space for your full node #686

Closed
panalyticsBsc opened this issue Dec 23, 2021 · 2 comments

panalyticsBsc commented Dec 23, 2021

It is becoming more and more difficult to run your own full node due to ever increasing demands on system resources, especially if running several full nodes and maintaining and updating them almost daily is out of scope. Here I want to share our setup and tips for running a full node without delays or prune stops.

Hardware

Really important: if you do not have the proper hardware you will never be able to sync. You need at least:

Storage: 2 TB+ NVMe SSD, PCIe 3.0+, 8k IOPS, 250 MB/s (a quick fio sanity check is sketched below)
Memory: 64 GB+
Cores: 8+
Internet: 100/100 Mbit/s
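
If you want to verify a disk before committing to it, here is a minimal sketch using the fio benchmarking tool (assuming fio is installed; the directory and job parameters are illustrative, not part of the original setup):

```bash
# Random-read benchmark against the disk that will hold chaindata.
# --bs=4k with a deep queue approximates geth's small random reads;
# point --directory at the actual mount point you plan to use.
fio --name=chaindata-randread \
    --directory=/mnt/nvme \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=64 \
    --size=4G --runtime=60 --time_based \
    --group_reporting
```

Check the reported read IOPS and bandwidth; if they fall short of the numbers above, syncing will struggle.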

OS + binaries

BSC releases binaries for Linux, macOS, and Windows. You can choose an OS of your liking; we prefer a Linux-based OS, and for a guide on setting it up on Linux you can check: Run full node with geth

Independent of the OS you prefer, make sure you are always running the latest stable version, currently v1.1.7.
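
A minimal sketch of fetching a release on Linux (the repository now lives under bnb-chain/bsc; the asset name geth_linux matches the release layout at the time, but check the releases page before copying this):

```bash
# Download the v1.1.7 Linux binary and make it executable.
wget https://github.com/bnb-chain/bsc/releases/download/v1.1.7/geth_linux
chmod +x geth_linux

# Confirm the binary reports the expected version.
./geth_linux version
```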

Performance tuning

--diffsync

I assume most of you are familiar with the diffsync protocol. It was rolled out in release v1.1.5 and improves syncing speed by 60%-70%. It can be enabled by adding --diffsync to the start command, as shown below.
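
For example (a sketch; the config path and data directory are placeholders for whatever your existing start command uses):

```bash
# Existing start command with diffsync enabled (requires v1.1.5+).
./geth_linux --config ./config.toml \
             --datadir /mnt/nvme/bsc \
             --diffsync
```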

--datadir.ancient

At this moment geth/chaindata/ancient consumes almost 600 GB, which does not need to sit on our precious SSD. If geth is running, stop it and move this directory to another, preferably slower, disk. Once the ancient data is moved, start geth with your normal command plus --datadir.ancient DATA_DIR_ANCIENT, for example:
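
A sketch of the whole move (the paths and the process manager are illustrative; substitute your own):

```bash
# Stop the node first so the database is not written to during the move.
systemctl stop bsc   # or however you manage your geth process

# Move the ~600 GB ancient store to a cheaper, slower disk.
mv /mnt/nvme/bsc/geth/chaindata/ancient /mnt/hdd/bsc-ancient

# Restart with the flag pointing at the new location.
./geth_linux --config ./config.toml \
             --datadir /mnt/nvme/bsc \
             --diffsync \
             --datadir.ancient /mnt/hdd/bsc-ancient
```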

--txlookuplimit & debug.chaindbCompact()

txlookuplimit = the number of recent blocks to maintain the transaction index for.
I have seen a lot of start commands where txlookuplimit is set to 0, i.e. index everything. When I ask why, the main response I get is: well, that is what everyone seems to be using. Ask yourself: how often do I query for older blocks? If the answer is never (or close to it), consider setting this to a value that better matches your use case.
For example: if only new transactions are relevant to you, consider setting it to 50,000. At roughly 3 seconds per BSC block, that keeps about two days of blocks indexed (50,000 × ~3 s ≈ 1.7 days), and the index for older ones is removed.

After setting txlookuplimit to some value, I recommend running the debug.chaindbCompact() command in the geth console (geth attach geth.ipc) to compact the chaindb. In our case this saved about 50 GB.
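
A sketch of both steps together (paths as before are placeholders):

```bash
# Start with a bounded transaction index (~2 days of blocks).
./geth_linux --config ./config.toml \
             --datadir /mnt/nvme/bsc \
             --diffsync \
             --datadir.ancient /mnt/hdd/bsc-ancient \
             --txlookuplimit 50000

# Once the older index entries have been removed, attach to the running
# node and run debug.chaindbCompact() in the console to reclaim space:
./geth_linux attach /mnt/nvme/bsc/geth.ipc
# > debug.chaindbCompact()
```

Compaction can take a while and adds disk load, so run it in a quiet period.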

Ending note

As of 23-12-2021 our BSC node takes:
~1 TB mainnet chaindata
~600 GB ancient

and it has been running stably for almost a month without stopping to prune.

Happy holidays!

forcodedancing (Contributor) commented:

Brilliant insights and great suggestions.
@panalyticsBsc Thanks a lot for sharing. It will benefit others who are also running full nodes.

zzzckck (Collaborator) commented Dec 13, 2023

Thanks, some flags have changed; closing this one.

zzzckck closed this as completed Dec 13, 2023