diff --git a/docs/guides/external-node/00_quick_start.md b/docs/guides/external-node/00_quick_start.md
index 63540b9ada20..67a1b89eef51 100644
--- a/docs/guides/external-node/00_quick_start.md
+++ b/docs/guides/external-node/00_quick_start.md
@@ -51,12 +51,14 @@ The HTTP JSON-RPC API can be accessed on port `3060` and WebSocket API can be ac
 
 > [!NOTE]
 >
-> Those are requirements for nodes that use snapshots recovery and history pruning (the default for docker-compose setup).
+> Those are requirements for nodes that use snapshots recovery and history pruning (the default for docker-compose
+> setup).
 >
-> For requirements for nodes running from DB dump see the [running](03_running.md) section. DB dumps are a way to start ZKsync node with full historical transactions history.
+> For requirements for nodes running from a DB dump, see the [running](03_running.md) section. DB dumps are a way to
+> start a ZKsync node with the full historical transaction history.
 >
-> For nodes with pruning disabled, expect the storage requirements on mainnet to grow at 1TB per month. If you want to stop historical DB
-> pruning you can read more about this in the [pruning](08_pruning.md) section.
+> For nodes with pruning disabled, expect the storage requirements on mainnet to grow at 1TB per month. If you want to
+> stop historical DB pruning, you can read more about this in the [pruning](08_pruning.md) section.
 
 - 32 GB of RAM and a relatively modern CPU
 - 50 GB of storage for testnet nodes
diff --git a/docs/guides/external-node/01_intro.md b/docs/guides/external-node/01_intro.md
index 0435ea27cd4d..10fc55acac21 100644
--- a/docs/guides/external-node/01_intro.md
+++ b/docs/guides/external-node/01_intro.md
@@ -10,9 +10,9 @@ This documentation explains the basics of the ZKsync Node.
 ## What is the ZKsync node
 
 The ZKsync node is a read-replica of the main (centralized) node that can be run by external parties. It functions by
-receiving blocks from the ZKsync network and re-applying transactions locally, starting from the genesis block. The ZKsync node
-shares most of its codebase with the main node. Consequently, when it re-applies transactions, it does so exactly as the
-main node did in the past.
+receiving blocks from the ZKsync network and re-applying transactions locally, starting from the genesis block. The
+ZKsync node shares most of its codebase with the main node. Consequently, when it re-applies transactions, it does so
+exactly as the main node did in the past.
 
 **It has two modes of initialization:**
 
diff --git a/docs/guides/external-node/04_observability.md b/docs/guides/external-node/04_observability.md
index 0372354c6cf2..05b39b74c5d2 100644
--- a/docs/guides/external-node/04_observability.md
+++ b/docs/guides/external-node/04_observability.md
@@ -38,6 +38,5 @@ memory leaking.
 | `api_web3_call`          | Histogram | `method` | Duration of Web3 API calls                             |
 | `sql_connection_acquire` | Histogram | -        | Time to get an SQL connection from the connection pool |
 
-
 Metrics can be used to detect anomalies in configuration, which is described in more detail in the
 [next section](05_troubleshooting.md).
diff --git a/docs/guides/external-node/07_snapshots_recovery.md b/docs/guides/external-node/07_snapshots_recovery.md
index ce874b53e624..0053717af063 100644
--- a/docs/guides/external-node/07_snapshots_recovery.md
+++ b/docs/guides/external-node/07_snapshots_recovery.md
@@ -2,8 +2,8 @@
 
 Instead of initializing a node using a Postgres dump, it's possible to configure a node to recover from a
 protocol-level snapshot. This process is much faster and requires much less storage. Postgres database of a mainnet node recovered from
-a snapshot is less than 500GB. Note that without [pruning](08_pruning.md) enabled, the node state will continuously
-grow at a rate about 15GB per day.
+a snapshot is less than 500GB. Note that without [pruning](08_pruning.md) enabled, the node state will continuously grow
+at a rate of about 15GB per day.
 
 ## How it works
 
@@ -94,6 +94,6 @@ An example of snapshot recovery logs during the first node start:
 
 Recovery logic also exports some metrics, the main of which are as follows:
 
-| Metric name                                             | Type      | Labels       | Description                                                           |
-| ------------------------------------------------------- | --------- | ------------ | --------------------------------------------------------------------- |
-| `snapshots_applier_storage_logs_chunks_left_to_process` | Gauge     | -            | Number of storage log chunks left to process during Postgres recovery |
+| Metric name                                             | Type  | Labels | Description                                                           |
+| ------------------------------------------------------- | ----- | ------ | --------------------------------------------------------------------- |
+| `snapshots_applier_storage_logs_chunks_left_to_process` | Gauge | -      | Number of storage log chunks left to process during Postgres recovery |
diff --git a/zk_toolbox/crates/zk_inception/src/commands/update.rs b/zk_toolbox/crates/zk_inception/src/commands/update.rs
index c140c3a4e9c8..5cb7208ffd0c 100644
--- a/zk_toolbox/crates/zk_inception/src/commands/update.rs
+++ b/zk_toolbox/crates/zk_inception/src/commands/update.rs
@@ -2,26 +2,31 @@ use std::path::Path;
 
 use anyhow::{Context, Ok};
 use common::{
+    db::migrate_db,
     git, logger,
     spinner::Spinner,
     yaml::{merge_yaml, ConfigDiff},
 };
 use config::{
-    ChainConfig, EcosystemConfig, CONTRACTS_FILE, EN_CONFIG_FILE, ERA_OBSERBAVILITY_DIR,
-    GENERAL_FILE, GENESIS_FILE, SECRETS_FILE,
+    traits::ReadConfigWithBasePath, ChainConfig, EcosystemConfig, CONTRACTS_FILE, EN_CONFIG_FILE,
+    ERA_OBSERBAVILITY_DIR, GENERAL_FILE, GENESIS_FILE, SECRETS_FILE,
 };
 use xshell::Shell;
+use zksync_config::configs::Secrets;
 
 use super::args::UpdateArgs;
-use crate::messages::{
-    msg_diff_contracts_config, msg_diff_genesis_config, msg_diff_secrets, msg_updating_chain,
-    MSG_CHAIN_NOT_FOUND_ERR, MSG_DIFF_EN_CONFIG, MSG_DIFF_EN_GENERAL_CONFIG,
-    MSG_DIFF_GENERAL_CONFIG, MSG_PULLING_ZKSYNC_CODE_SPINNER,
-    MSG_UPDATING_ERA_OBSERVABILITY_SPINNER, MSG_UPDATING_SUBMODULES_SPINNER, MSG_UPDATING_ZKSYNC,
-    MSG_ZKSYNC_UPDATED,
+use crate::{
+    consts::{PROVER_MIGRATIONS, SERVER_MIGRATIONS},
+    messages::{
+        msg_diff_contracts_config, msg_diff_genesis_config, msg_diff_secrets, msg_updating_chain,
+        MSG_CHAIN_NOT_FOUND_ERR, MSG_DIFF_EN_CONFIG, MSG_DIFF_EN_GENERAL_CONFIG,
+        MSG_DIFF_GENERAL_CONFIG, MSG_PULLING_ZKSYNC_CODE_SPINNER,
+        MSG_UPDATING_ERA_OBSERVABILITY_SPINNER, MSG_UPDATING_SUBMODULES_SPINNER,
+        MSG_UPDATING_ZKSYNC, MSG_ZKSYNC_UPDATED,
+    },
 };
 
-pub fn run(shell: &Shell, args: UpdateArgs) -> anyhow::Result<()> {
+pub async fn run(shell: &Shell, args: UpdateArgs) -> anyhow::Result<()> {
     logger::info(MSG_UPDATING_ZKSYNC);
 
     let ecosystem = EcosystemConfig::from_file(shell)?;
@@ -48,7 +53,8 @@ pub fn run(shell: &Shell, args: UpdateArgs) -> anyhow::Result<()> {
             &genesis_config_path,
             &contracts_config_path,
             &secrets_path,
-        )?;
+        )
+        .await?;
     }
 
     let path_to_era_observability = shell.current_dir().join(ERA_OBSERBAVILITY_DIR);
@@ -114,7 +120,7 @@ fn update_config(
     Ok(())
 }
 
-fn update_chain(
+async fn update_chain(
     shell: &Shell,
     chain: &ChainConfig,
     general: &Path,
@@ -177,5 +183,17 @@ fn update_chain(
         )?;
     }
 
+    let secrets = Secrets::read_with_base_path(shell, secrets)?;
+
+    if let Some(db) = secrets.database {
+        if let Some(url) = db.server_url {
+            let path_to_migration = chain.link_to_code.join(SERVER_MIGRATIONS);
+            migrate_db(shell, path_to_migration, url.expose_url()).await?;
+        }
+        if let Some(url) = db.prover_url {
+            let path_to_migration = chain.link_to_code.join(PROVER_MIGRATIONS);
+            migrate_db(shell, path_to_migration, url.expose_url()).await?;
+        }
+    }
     Ok(())
 }
diff --git a/zk_toolbox/crates/zk_inception/src/main.rs b/zk_toolbox/crates/zk_inception/src/main.rs
index 474c12130849..0af9922d0c41 100644
--- a/zk_toolbox/crates/zk_inception/src/main.rs
+++ b/zk_toolbox/crates/zk_inception/src/main.rs
@@ -135,7 +135,7 @@ async fn run_subcommand(inception_args: Inception, shell: &Shell) -> anyhow::Res
         InceptionSubcommands::Explorer(args) => commands::explorer::run(shell, args).await?,
         InceptionSubcommands::Consensus(cmd) => cmd.run(shell).await?,
         InceptionSubcommands::Portal => commands::portal::run(shell).await?,
-        InceptionSubcommands::Update(args) => commands::update::run(shell, args)?,
+        InceptionSubcommands::Update(args) => commands::update::run(shell, args).await?,
         InceptionSubcommands::Markdown => {
             clap_markdown::print_help_markdown::<Inception>();
         }
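Note for reviewers: the tail appended to `update_chain` runs database migrations only for the URLs that are actually present in the chain's secrets, so a chain without a prover (or without any database secrets) is skipped silently. The control flow can be sketched in isolation as follows; `DatabaseStub`, `SecretsStub`, and the string-returning `plan_migrations` are hypothetical stand-ins for the real `Secrets` and `migrate_db`, not the crate's API:

```rust
// Hypothetical stand-ins for the secrets types read from `secrets.yaml`.
struct DatabaseStub {
    server_url: Option<String>,
    prover_url: Option<String>,
}

struct SecretsStub {
    database: Option<DatabaseStub>,
}

// Mirrors the new tail of `update_chain`: server migrations run only when a
// server URL is configured, prover migrations only when a prover URL is.
fn plan_migrations(secrets: SecretsStub) -> Vec<String> {
    let mut planned = Vec::new();
    if let Some(db) = secrets.database {
        if let Some(url) = db.server_url {
            planned.push(format!("server migrations against {url}"));
        }
        if let Some(url) = db.prover_url {
            planned.push(format!("prover migrations against {url}"));
        }
    }
    planned
}

fn main() {
    // A chain with a server database but no prover database configured.
    let secrets = SecretsStub {
        database: Some(DatabaseStub {
            server_url: Some("postgres://localhost/server".into()),
            prover_url: None,
        }),
    };
    // Only the server migration is planned; the prover step is skipped.
    for step in plan_migrations(secrets) {
        println!("{step}");
    }
}
```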