docs(book): pruning distance 128 -> 10064, update node size numbers (#…
shekhirin authored Oct 22, 2023
1 parent c74abbc commit 82bffbf
Showing 4 changed files with 92 additions and 91 deletions.
2 changes: 1 addition & 1 deletion book/cli/node.md
@@ -386,7 +386,7 @@ Dev testnet:
Pruning:
--full
- Run full node. Only the most recent 128 block states are stored. This flag takes priority over pruning configuration in reth.toml
+ Run full node. Only the most recent 10064 block states are stored. This flag takes priority over pruning configuration in reth.toml
Logging:
--log.directory <PATH>
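The flag above is the CLI-level counterpart of the `[prune]` section edited later in this commit: `--full` overrides whatever `reth.toml` specifies. As a minimal sketch only — assuming the 10064-block figure from the flag description applies to the history segments, which this diff does not state — a comparable retention window expressed in the config file could look like:

```toml
# Sketch only: approximates the retention window implied by --full via reth.toml.
# The 10_064 distance for these two segments is an assumption based on the flag
# description above, not a value taken from this commit.
[prune]
block_interval = 5

[prune.parts]
account_history = { distance = 10_064 } # keep account history for roughly the last 10064 blocks
storage_history = { distance = 10_064 } # keep storage history for roughly the last 10064 blocks
```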
6 changes: 3 additions & 3 deletions book/installation/installation.md
@@ -16,7 +16,7 @@ The most important requirement is by far the disk, whereas CPU and RAM requireme

| | Archive Node | Full Node |
|-----------|---------------------------------------|-------------------------------------|
- | Disk | At least 2.1TB (TLC NVMe recommended) | At least 1TB (TLC NVMe recommended) |
+ | Disk | At least 2.2TB (TLC NVMe recommended) | At least 1TB (TLC NVMe recommended) |
| Memory | 8GB+ | 8GB+ |
| CPU | Higher clock speed over core count | Higher clock speeds over core count |
| Bandwidth | Stable 24Mbps+ | Stable 24Mbps+ |
@@ -34,9 +34,9 @@ Prior to purchasing an NVMe drive, it is advisable to research and determine whe
### Disk

There are multiple types of disks to sync Reth, with varying size requirements, depending on the syncing mode.
- As of August 2023 at block number 17.9M:
+ As of October 2023 at block number 18.3M:

- * Archive Node: At least 2.1TB is required
+ * Archive Node: At least 2.2TB is required
* Full Node: At least 1TB is required

NVMe drives are recommended for the best performance, with SSDs being a cheaper alternative. HDDs are the cheapest option, but they will take the longest to sync, and are not recommended.
9 changes: 5 additions & 4 deletions book/run/config.md
@@ -346,7 +346,8 @@ No pruning, run as archive node.

This configuration will:
- Run pruning every 5 blocks
- - Continuously prune all transaction senders, account history and storage history before the block `head-128`, i.e. keep the data for the last 129 blocks
+ - Continuously prune all transaction senders, account history and storage history before the block `head-100_000`,
+   i.e. keep the data for the last `100_000` blocks
- Prune all receipts before the block 1920000, i.e. keep receipts from the block 1920000

```toml
@@ -356,7 +357,7 @@ block_interval = 5

[prune.parts]
# Sender Recovery pruning configuration
- sender_recovery = { distance = 128 } # Prune all transaction senders before the block `head-128`, i.e. keep transaction senders for the last 129 blocks
+ sender_recovery = { distance = 100_000 } # Prune all transaction senders before the block `head-100_000`, i.e. keep transaction senders for the last `100_000` blocks

# Transaction Lookup pruning configuration
transaction_lookup = "full" # Prune all TxNumber => TxHash mappings
@@ -365,10 +366,10 @@ transaction_lookup = "full" # Prune all TxNumber => TxHash mappings
receipts = { before = 1920000 } # Prune all receipts from transactions before the block 1920000, i.e. keep receipts from the block 1920000

# Account History pruning configuration
- account_history = { distance = 128 } # Prune all historical account states before the block `head-128`
+ account_history = { distance = 100_000 } # Prune all historical account states before the block `head-100_000`

# Storage History pruning configuration
- storage_history = { distance = 128 } # Prune all historical storage states before the block `head-128`
+ storage_history = { distance = 100_000 } # Prune all historical storage states before the block `head-100_000`
```

We can also prune receipts more granularly, using the logs filtering:
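The logs-filtering example itself sits below the fold of this hunk. As a rough sketch of the shape such a filter can take — the `receipts_log_filter` key name, the address, and the block number below are illustrative assumptions, not values taken from this commit:

```toml
# Sketch only: keep receipts that contain logs from one address and prune the rest.
# Key name, address, and block number are placeholders for illustration.
[prune.parts.receipts_log_filter]
# Prune receipts before block 17000000 unless they contain logs emitted by this address.
"0xdac17f958d2ee523a2206206994597c13d831ec7" = { before = 17000000 }
```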
