I'm trying to get a pruned node running on Ethereum mainnet.
Docker Compose configuration:
```yaml
version: '3.5'

# Basic erigon's service
x-erigon-service: &default-erigon-service
  image: thorax/erigon:v2.57.0
  pid: service:erigon
  volumes_from: [ erigon ]
  restart: unless-stopped
  mem_swappiness: 0
  user: ${DOCKER_UID:-1000}:${DOCKER_GID:-1000}

services:
  erigon:
    image: thorax/erigon:v2.57.0
    build:
      args:
        UID: ${DOCKER_UID:-1000}
        GID: ${DOCKER_GID:-1000}
      context: .
    command: |
      ${ERIGON_FLAGS-} --log.console.verbosity=3 --db.pagesize=16k
      --prune=hrtc --prune.h.older=90000 --prune.r.older=90000 --prune.t.older=90000 --prune.c.older=90000 --
      --private.api.addr=0.0.0.0:9090 --datadir=/home/erigon/.local/share/erigon
      --authrpc.addr=0.0.0.0 --authrpc.port=8551 --torrent.port=42070
      --authrpc.jwtsecret=/home/erigon/.local/share/erigon/jwt.hex
      --port=40303 --p2p.allowed-ports=40303,40304,40305,40306,40307
    ports: [ "8552:8551", "40303:40303/tcp", "40303:40303/udp", "40304:40304/tcp", "40304:40304/udp", "42070:42070/tcp", "42070:42070/udp", "6060:6060", "6061:6061" ]
    volumes:
      # It's ok to mount sub-dirs of "datadir" to different drives
      - ${XDG_DATA_HOME:-/opt/erigon/data-eth}/erigon:/home/erigon/.local/share/erigon
    restart: unless-stopped
    mem_swappiness: 0
    user: ${DOCKER_UID:-1000}:${DOCKER_GID:-1000}

  rpcdaemon:
    <<: *default-erigon-service
    entrypoint: rpcdaemon
    command: |
      ${RPCDAEMON_FLAGS-} --http.addr=0.0.0.0 --log.console.verbosity=4
      --http.vhosts='*' --http.corsdomain='*'
      --http.api="trace,debug,eth,erigon,web3,net,txpool" --ws
      --private.api.addr=erigon:9090 --txpool.api.addr=erigon:9090
      --datadir=/home/erigon/.local/share/erigon
    ports: [ "8546:8545" ]

  prysm:
    image: gcr.io/prysmaticlabs/prysm/beacon-chain:v4.2.0
    command: |
      --checkpoint-sync-url=https://mainnet-checkpoint-sync.stakely.io
      --accept-terms-of-use --datadir=/data
      --jwt-secret=/home/erigon/.local/share/erigon/jwt.hex
      --rpc-host=0.0.0.0 --grpc-gateway-host=0.0.0.0
      --execution-endpoint=http://192.168.224.2:8551
    ports:
      - "4000:4000"
      - "13000:13000"
      - "12000:12000/udp"
    volumes:
      - ${XDG_DATA_HOME:-/opt/erigon/data-eth}/prysm:/data:rw
      - ${XDG_DATA_HOME:-/opt/erigon/data-eth}/erigon:/home/erigon/.local/share/erigon:ro
```
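As a quick sanity check (a sketch, assuming the container is named `docker_erigon_1` as in the `du` output below), the argument vector the container was actually started with can be dumped to confirm that every `--prune*` flag survived the multi-line `command: |` block. The standalone `--` in the middle of the command may also be worth a second look, since a bare `--` conventionally ends flag parsing in many CLI parsers.

```sh
# Show the command line that Docker Compose passed to the erigon container.
# Container name docker_erigon_1 is an assumption; adjust to your compose project.
docker inspect --format '{{json .Config.Cmd}}' docker_erigon_1

# Show the argument vector of the running process, one flag per line
# (assumes erigon runs as PID 1 inside the container, i.e. the image's
# default entrypoint is used).
docker exec docker_erigon_1 cat /proc/1/cmdline | tr '\0' '\n'
```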
The node syncs up fine, but the datadir keeps growing, up to 3.7T, until I run out of disk space:
```
docker exec -it docker_erigon_1 du -h -d 2 /home/erigon/.local/share/erigon/
68.9M   /home/erigon/.local/share/erigon/logs
3.0T    /home/erigon/.local/share/erigon/chaindata
0       /home/erigon/.local/share/erigon/temp
0       /home/erigon/.local/share/erigon/snapshots/idx
0       /home/erigon/.local/share/erigon/snapshots/history
0       /home/erigon/.local/share/erigon/snapshots/domain
0       /home/erigon/.local/share/erigon/snapshots/accessor
618.7G  /home/erigon/.local/share/erigon/snapshots
21.6M   /home/erigon/.local/share/erigon/downloader
49.0M   /home/erigon/.local/share/erigon/txpool
9.1M    /home/erigon/.local/share/erigon/nodes/eth67
9.2M    /home/erigon/.local/share/erigon/nodes/eth68
18.3M   /home/erigon/.local/share/erigon/nodes
0       /home/erigon/.local/share/erigon/caplin/history
0       /home/erigon/.local/share/erigon/caplin/indexing
0       /home/erigon/.local/share/erigon/caplin
3.6T    /home/erigon/.local/share/erigon/
```
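To see whether pruning keeps up over time rather than at a single point, a minimal shell sketch (container name and path taken from the `du` command above) can sample the chaindata size periodically:

```sh
#!/bin/sh
# Record the chaindata size every 10 minutes; stop with Ctrl-C.
while true; do
  printf '%s  ' "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
  docker exec docker_erigon_1 du -sh /home/erigon/.local/share/erigon/chaindata
  sleep 600
done
```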
Log output from node startup (no warnings about invalid prune settings):
```
[INFO] [02-18|12:31:44.160] logging to file system log dir=/home/erigon/.local/share/erigon/logs file prefix=erigon log level=info json=false
[INFO] [02-18|12:31:44.160] Build info git_branch=heads/v2.57.0 git_tag=v2.57.0-dirty git_commit=4f6eda7694b4f33d2f907b40088e3a83192b5c2c
[INFO] [02-18|12:31:44.160] Starting Erigon on Ethereum mainnet...
[INFO] [02-18|12:31:44.161] Maximum peer count ETH=100 total=100
[INFO] [02-18|12:31:44.161] starting HTTP APIs port=8545 APIs=eth,erigon,engine
[INFO] [02-18|12:31:44.162] torrent verbosity level=WRN
[INFO] [02-18|12:31:44.162] [torrent] Public IP ip=XXXXX
[INFO] [02-18|12:31:44.163] Set global gas cap cap=50000000
[INFO] [02-18|12:31:44.193] [Downloader] Running with ipv6-enabled=true ipv4-enabled=true download.rate=500mb upload.rate=10mb
[INFO] [02-18|12:31:44.193] Opening Database label=chaindata path=/home/erigon/.local/share/erigon/chaindata
[INFO] [02-18|12:31:44.198] [db] open lable=chaindata sizeLimit=12TB pageSize=16384
[INFO] [02-18|12:31:44.201] Initialised chain configuration config="{ChainID: 1, Homestead: 1150000, DAO: 1920000, Tangerine Whistle: 2463000, Spurious Dragon: 2675000, Byzantium: 4370000, Constantinople: 7280000, Petersburg: 7280000, Istanbul: 9069000, Muir Glacier: 9200000, Berlin: 12244000, London: 12965000, Arrow Glacier: 13773000, Gray Glacier: 15050000, Terminal Total Difficulty: 58750000000000000000000, Merge Netsplit: <nil>, Shanghai: 1681338455, Cancun: <nil>, Prague: <nil>, Engine: ethash}" genesis=0xd4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3
[INFO] [02-18|12:32:09.955] Initialising Ethereum protocol network=1
[INFO] [02-18|12:32:09.955] Disk storage enabled for ethash DAGs dir=/home/erigon/.local/share/erigon/ethash-dags count=2
[INFO] [02-18|12:32:11.713] Starting private RPC server on=0.0.0.0:9090
[INFO] [02-18|12:32:11.713] new subscription to logs established
[INFO] [02-18|12:32:11.714] rpc filters: subscribing to Erigon events
[INFO] [02-18|12:32:11.714] New txs subscriber joined
[INFO] [02-18|12:32:11.714] new subscription to newHeaders established
[INFO] [02-18|12:32:11.715] Reading JWT secret path=/home/erigon/.local/share/erigon/jwt.hex
[INFO] [02-18|12:32:11.716] HTTP endpoint opened for Engine API url=[::]:8551 ws=true ws.compression=true
[INFO] [02-18|12:32:11.716] [txpool] Started
[INFO] [02-18|12:32:11.716] JsonRpc endpoint opened ws=false ws.compression=true grpc=false http.url=127.0.0.1:8545
[INFO] [02-18|12:32:11.721] Started P2P networking version=67 self=enode://cfec1d7e8aa1b1dbfefd45288f9cb3136accc074263dcaab35285ddb6dc2b3c5ad4e3360d0a4c5055e5b71dd3aec8d2abac45c4e3f545b89a9a75758edd40820@XXXXX:40304 name=erigon/v2.57.0-4f6eda76/linux-amd64/go1.20.12
[INFO] [02-18|12:32:11.723] Started P2P networking version=68 self=enode://cfec1d7e8aa1b1dbfefd45288f9cb3136accc074263dcaab35285ddb6dc2b3c5ad4e3360d0a4c5055e5b71dd3aec8d2abac45c4e3f545b89a9a757sdfdf40820@XXXXX:40303 name=erigon/v2.57.0-4f6eda76/linux-amd64/go1.20.12
[INFO] [02-18|12:32:11.724] [1/12 Snapshots] Fetching torrent files metadata
[INFO] [02-18|12:32:31.923] [1/12 Snapshots] download finished time=20.000576902s
[INFO] [02-18|12:32:33.859] [snapshots:download] Blocks Stat blocks=19164k indices=19164k alloc=2.5GB sys=5.0GB
[INFO] [02-18|12:32:33.865] [snapshots] Prune Blocks to=19164000 limit=100
```
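To confirm the prune stages are actually running after startup, the container logs can be filtered for prune-related lines (a rough sketch; the exact wording of Erigon's prune log lines differs between versions, so the pattern is kept broad):

```sh
# Case-insensitive filter over everything logged so far.
docker logs docker_erigon_1 2>&1 | grep -i prune

# Or follow new output live.
docker logs -f docker_erigon_1 2>&1 | grep -i --line-buffered prune
```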
Based on a few other tickets I found, the datadir should be around 900 GB for Ethereum mainnet, shouldn't it?
Are there any obvious errors regarding pruning in my configuration?
Thanks in advance!
Hi @nikolinsko,
Interesting. I just checked my mainnet node (no pruning) and it's about 2.6TB:
```
2.1T    /tmp/.ethereum/chaindata
4.0K    /tmp/.ethereum/jwt.hex
4.0K    /tmp/.ethereum/nodekey
81M     /tmp/.ethereum/nodes
536G    /tmp/.ethereum/snapshots
12K     /tmp/.ethereum/temp
705M    /tmp/.ethereum/txpool
```
Synced up again with `--prune=hrtc` and no additional prune flags, and this time the chaindata takes up around ~1T:
```
626G    /opt/erigon/data-eth/erigon/snapshots
25M     /opt/erigon/data-eth/erigon/nodes
18M     /opt/erigon/data-eth/erigon/logs
958G    /opt/erigon/data-eth/erigon/chaindata
1.1M    /opt/erigon/data-eth/erigon/temp
22M     /opt/erigon/data-eth/erigon/downloader
97M     /opt/erigon/data-eth/erigon/txpool
0       /opt/erigon/data-eth/erigon/caplin
1.6T    /opt/erigon/data-eth/erigon
0       /opt/erigon/data-eth/prysm/blobs
3.4G    /opt/erigon/data-eth/prysm/beaconchaindata
3.4G    /opt/erigon/data-eth/prysm
1.6T    /opt/erigon/data-eth/
```
Not sure what caused the previous issues.
Closing this :)
The next release will include a feature that reduces the size of a pruned node by 2x: #9123 (it only applies after a re-sync).