Squashed commit of sigp#3775
Squashed commit of the following:

commit ad08d07
Author: Paul Hauner <paul@paulhauner.com>
Date:   Mon Dec 5 16:51:06 2022 +1100

    Remove crits for late block

commit 8e85d62
Author: Paul Hauner <paul@paulhauner.com>
Date:   Mon Dec 5 16:48:43 2022 +1100

    Downgrade log for payload reveal failure

commit 84392d6
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Fri Dec 2 00:07:43 2022 +0000

    Delete DB schema migrations for v11 and earlier (sigp#3761)

    ## Proposed Changes

    Now that the Gnosis merge is scheduled, all users should have upgraded beyond Lighthouse v3.0.0. Accordingly we can delete schema migrations for versions prior to v3.0.0.

    ## Additional Info

    I also deleted the state cache stuff I added in sigp#3714 as it turned out to be useless for the light client proofs due to the one-slot offset.

commit 18c9be5
Author: Mac L <mjladson@pm.me>
Date:   Thu Dec 1 06:03:53 2022 +0000

    Add API endpoint to count statuses of all validators (sigp#3756)

    ## Issue Addressed

    sigp#3724

    ## Proposed Changes

    Adds an endpoint to quickly count the number of occurrences of each status in the validator set.

    ## Usage

    ```bash
    curl -X GET "http://localhost:5052/lighthouse/ui/validator_count" -H "accept: application/json" | jq
    ```

    ```json
    {
      "data": {
        "active_ongoing":479508,
        "active_exiting":0,
        "active_slashed":0,
        "pending_initialized":28,
        "pending_queued":0,
        "withdrawal_possible":933,
        "withdrawal_done":0,
        "exited_unslashed":0,
        "exited_slashed":3
      }
    }
    ```

commit 2211504
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Wed Nov 30 05:22:58 2022 +0000

    Prioritise important parts of block processing (sigp#3696)

    ## Issue Addressed

    Closes sigp#2327

    ## Proposed Changes

    This is an extension of some ideas I implemented while working on `tree-states`:

    - Cache the indexed attestations from blocks in the `ConsensusContext` (see the sketch after this list). Previously we were re-computing them 3-4 times over.
    - Clean up `import_block` by splitting each part into `import_block_XXX`.
    - Move some stuff off hot paths, specifically:
      - Relocate non-essential tasks that were running between receiving the payload verification status and priming the early attester cache. These tasks are moved after the cache priming:
        - Attestation observation
        - Validator monitor updates
        - Slasher updates
        - Updating the shuffling cache
      - Fork choice attestation observation now happens at the end of block verification in parallel with payload verification (this seems to save 5-10ms).
      - Payload verification now happens _before_ advancing the pre-state and writing it to disk! States were previously being written eagerly and adding ~20-30ms in front of verifying the execution payload. State catchup also sometimes takes ~500ms if we get a cache miss and need to rebuild the tree hash cache.
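
    The caching change is the easiest one to picture. Below is a minimal sketch of the idea with hypothetical stand-in types; the real `ConsensusContext` and indexed-attestation types differ:

    ```rust
    // A minimal sketch, not Lighthouse's actual implementation: memoize each
    // block's indexed attestations on the consensus context so later stages
    // reuse one computation instead of re-deriving it 3-4 times.
    use std::collections::HashMap;

    type AttestationDataRoot = [u8; 32]; // hypothetical cache key
    type IndexedAttestation = Vec<u64>; // stand-in: attesting validator indices

    #[derive(Default)]
    pub struct ConsensusContext {
        indexed_attestations: HashMap<AttestationDataRoot, IndexedAttestation>,
    }

    impl ConsensusContext {
        /// Return the cached indexed attestation, computing it at most once.
        pub fn indexed_attestation<F>(
            &mut self,
            root: AttestationDataRoot,
            compute: F,
        ) -> &IndexedAttestation
        where
            F: FnOnce() -> IndexedAttestation,
        {
            self.indexed_attestations.entry(root).or_insert_with(compute)
        }
    }
    ```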

    The remaining task that's taking substantial time (~20ms) is importing the block to fork choice. I _think_ this is because of pull-tips, and we should be able to optimise it out with a clever total active balance cache in the state (which would be computed in parallel with payload verification). I've decided to leave that for future work though. For now it can be observed via the new `beacon_block_processing_post_exec_pre_attestable_seconds` metric.

    Co-authored-by: Michael Sproul <micsproul@gmail.com>

commit b4f4c0d
Author: Divma <divma@protonmail.com>
Date:   Wed Nov 30 03:21:35 2022 +0000

    Ipv6 bootnodes (sigp#3752)

    ## Issue Addressed
    Our bootnodes currently support only IPv4. This change updates them to support IPv6 as well.

    ## Proposed Changes
    - Adds the code necessary to update the bootnodes to run as dual-stack nodes, and therefore to contact and store IPv6 nodes.
    - Adds some metrics about the connectivity type of stored peers. It might have been nice to see some metrics over the sessions, but that feels out of scope right now.

    ## Additional Info
    - Some code-quality improvements sneaked in, since the changes seemed small.
    - I think it depends on the OS, but enabling mapped addresses on an IPv6 node without dual-stack support enabled could fail silently, making these nodes effectively IPv6-only. In the future I'll probably change this to use two sockets, which should fail loudly (see the sketch below).
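
    A sketch of the mapped-address pitfall, assuming the `socket2` crate (an illustration, not necessarily what the bootnodes actually use):

    ```rust
    // Disabling IPV6_V6ONLY asks the OS to deliver IPv4-mapped traffic on the
    // same socket. On systems without dual-stack support this request can be
    // ignored quietly, leaving the node effectively IPv6-only.
    use socket2::{Domain, Protocol, Socket, Type};
    use std::net::{Ipv6Addr, SocketAddr};

    fn bind_dual_stack(port: u16) -> std::io::Result<Socket> {
        let socket = Socket::new(Domain::IPV6, Type::DGRAM, Some(Protocol::UDP))?;
        socket.set_only_v6(false)?; // may silently have no effect on some OSes
        let addr = SocketAddr::from((Ipv6Addr::UNSPECIFIED, port));
        socket.bind(&addr.into())?;
        Ok(socket)
    }
    ```

    Using two sockets (one per address family) makes the failure mode explicit: the IPv4 bind either succeeds or returns an error, instead of depending on OS behaviour.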

commit 3534c85
Author: GeemoCandama <geemo@tutanota.com>
Date:   Tue Nov 29 08:19:27 2022 +0000

    Optimize finalized chain sync by skipping newPayload messages (sigp#3738)

    ## Issue Addressed

    sigp#3704

    ## Proposed Changes
    Adds an `is_syncing_finalized: bool` parameter to the block verification functions. Sets the `payload_verification_status` to `Optimistic` if `is_syncing_finalized` is true. Uses the `SyncState` in `NetworkGlobals` in the `BeaconProcessor` to retrieve the syncing status.

    ## Additional Info
    I could implement a `FinalizedSignatureVerifiedBlock` if you think it would be nicer.

commit a2969ba
Author: Paul Hauner <paul@paulhauner.com>
Date:   Tue Nov 29 05:51:42 2022 +0000

    Improve debugging experience for builder proposals (sigp#3725)

    ## Issue Addressed

    NA

    ## Proposed Changes

    This PR sets out to improve the logging/metrics experience when interacting with the builder. Namely, it:

    - Adds/changes metrics (see "Metrics Changes" section).
    - Adds new logs which show the duration of requests to the builder/local EL.
    - Refactors existing logs for consistency, and so that the `parent_hash` is included in all relevant logs (we can grep for this field when trying to trace the flow of block production).

    Additionally, while implementing this PR I noticed that we skip some verification of the builder payload in the scenario where the builder returns `Ok` but the local EL returns an `Err`. Namely, we were skipping checks of the bid signature and of other values like the parent hash and prev randao. In this PR I've changed it so we *always* check these values and reject the bid if they're incorrect. With these changes, we'll sometimes choose to skip a proposal rather than propose something invalid; that's the only side-effect of the changes that I can see.
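
    A minimal sketch of the "always verify" rule, using hypothetical types (the real checks verify a BLS bid signature and compare against the expected header values):

    ```rust
    // Hypothetical types for illustration; not Lighthouse's builder API.
    struct BuilderBid {
        parent_hash: [u8; 32],
        prev_randao: [u8; 32],
        signature_is_valid: bool, // stands in for real BLS signature verification
    }

    #[derive(Debug)]
    enum BidRejection {
        InvalidSignature,
        ParentHashMismatch,
        PrevRandaoMismatch,
    }

    // Run every check unconditionally, regardless of what the local EL returned.
    fn verify_builder_bid(
        bid: &BuilderBid,
        expected_parent_hash: [u8; 32],
        expected_prev_randao: [u8; 32],
    ) -> Result<(), BidRejection> {
        if !bid.signature_is_valid {
            return Err(BidRejection::InvalidSignature);
        }
        if bid.parent_hash != expected_parent_hash {
            return Err(BidRejection::ParentHashMismatch);
        }
        if bid.prev_randao != expected_prev_randao {
            return Err(BidRejection::PrevRandaoMismatch);
        }
        Ok(())
    }
    ```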

    ## Metrics Changes

    - Changed: `execution_layer_request_times`:
        - `method = "get_blinded_payload_local"`: time taken to get a payload from a local EE.
        - `method = "get_blinded_payload_builder"`: time taken to get a blinded payload from a builder.
        - `method = "post_blinded_payload_builder"`: time taken to get a builder to reveal a payload they've previously supplied us.
    - `execution_layer_get_payload_outcome`
        - `outcome = "success"`: we successfully produced a payload from a builder or local EE.
        - `outcome = "failure"`: we were unable to get a payload from a builder or local EE.
    - New: `execution_layer_builder_reveal_payload_outcome`
        - `outcome = "success"`: a builder revealed a payload from a signed, blinded block.
        - `outcome = "failure"`: the builder did not reveal the payload.
    - New: `execution_layer_get_payload_source`
        - `type = "builder"`: we used a payload from a builder to produce a block.
        - `type = "local"`: we used a payload from a local EE to produce a block.
    - New: `execution_layer_get_payload_builder_rejections` has a `reason` field to describe why we rejected a payload from a builder.
    - New: `execution_layer_payload_bids` tracks the bid (in gwei) from the builder or local EE (local EE not yet supported, waiting on EEs to expose the value). Can only record values that fit inside an i64 (roughly 9 million ETH).

    ## Additional Info

    NA

commit 99ec9d9
Author: kevinbogner <kevbogner@gmail.com>
Date:   Mon Nov 28 10:05:43 2022 +0000

    Add Run a Node guide (sigp#3681)

    ## Issue Addressed

    Related to sigp#3672

    ## Proposed Changes

    - Added a guide to run a node. Mainly copied and pasted from 'Merge Migration' and 'Checkpoint Sync'.
    - Ranked it high in the ToC:
      - Introduction
      - Installation
      - Run a Node
      - Become a Validator
      ...
    - Hid 'Merge Migration' in the ToC.

    ## Additional Info

    - Should I add/rephrase/delete something?
    - Now there is some redundancy:
      - 'Run a node' and 'Checkpoint Sync' contain similar information.
      - Same for 'Run a node' and 'Become a Validator'.

    Co-authored-by: kevinbogner <114221396+kevinbogner@users.noreply.github.com>
    Co-authored-by: Michael Sproul <micsproul@gmail.com>

commit 2779017
Author: Age Manning <Age@AgeManning.com>
Date:   Mon Nov 28 07:36:52 2022 +0000

    Gossipsub fast message id change (sigp#3755)

    For improved consistency, this mixes the topic into our fast message id, giving more consistent tracking of messages across topics.
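
    A sketch of the scheme (the hash function here is an assumption; gossipsub's fast message id function is pluggable):

    ```rust
    // Derive the fast message id from the topic plus the raw message data, so
    // identical bytes published on different topics yield distinct ids.
    use sha2::{Digest, Sha256};

    fn fast_message_id(topic: &str, message_data: &[u8]) -> [u8; 32] {
        let mut hasher = Sha256::new();
        hasher.update(topic.as_bytes()); // mix the topic in
        hasher.update(message_data);
        hasher.finalize().into()
    }
    ```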

commit c881b80
Author: Mac L <mjladson@pm.me>
Date:   Mon Nov 28 00:22:53 2022 +0000

    Add CLI flag for gui requirements (sigp#3731)

    ## Issue Addressed

    sigp#3723

    ## Proposed Changes

    Adds a new CLI flag, `--gui`, which enables all the various flags required for the GUI to function properly.
    Currently it enables the `--http` and `--validator-monitor-auto` flags.

commit 969ff24
Author: Mac L <mjladson@pm.me>
Date:   Fri Nov 25 07:57:11 2022 +0000

    Add CLI flag to opt in to world-readable log files (sigp#3747)

    ## Issue Addressed

    sigp#3732

    ## Proposed Changes

    Add a CLI flag to allow users to opt out of the restrictive permissions of the log files.

    ## Additional Info

    This is not recommended for most users. The log files can contain sensitive information such as validator indices, public keys and API tokens (see sigp#2438). However, some users with a multi-user setup may find this helpful if they understand the risks involved.
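
    A sketch of the underlying permission choice, assuming Unix permission bits (the actual flag name and implementation may differ):

    ```rust
    // Restrictive 0o600 (owner read/write) by default; world-readable 0o644
    // only when the user explicitly opts in.
    use std::fs::{File, OpenOptions};
    use std::os::unix::fs::OpenOptionsExt;

    fn open_log_file(path: &str, world_readable: bool) -> std::io::Result<File> {
        let mode = if world_readable { 0o644 } else { 0o600 };
        OpenOptions::new()
            .create(true)
            .append(true)
            .mode(mode) // applied only when the file is newly created
            .open(path)
    }
    ```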

commit e9bf7f7
Author: antondlr <anton@delaruelle.net>
Date:   Fri Nov 25 07:57:10 2022 +0000

    remove commas from comma-separated kv pairs (sigp#3737)

    ## Issue Addressed

    Logs are emitted as a comma-separated kv list, but the values sometimes contain commas themselves, which breaks parsing.
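
    One possible shape of the fix, sketched with hypothetical helpers (the actual change may differ):

    ```rust
    // Strip commas from values before emitting a comma-separated kv line, so
    // the line stays machine-parseable.
    fn sanitize_kv_value(value: &str) -> String {
        value.replace(',', "")
    }

    fn emit_kv_line(pairs: &[(&str, &str)]) -> String {
        pairs
            .iter()
            .map(|(key, value)| format!("{}={}", key, sanitize_kv_value(value)))
            .collect::<Vec<_>>()
            .join(", ")
    }
    ```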

commit d5a2de7
Author: Giulio rebuffo <giulio.rebuffo@gmail.com>
Date:   Fri Nov 25 05:19:00 2022 +0000

    Added LightClientBootstrap V1 (sigp#3711)

    ## Issue Addressed

    Partially addresses sigp#3651

    ## Proposed Changes

    Adds server-side support for the `light_client_bootstrap_v1` topic.

    ## Additional Info

    This PR creates a fresh bootstrap each time, without using a cache. I do not know how necessary a cache is in this case, as this topic is not supposed to be called frequently, and IMHO we can just prevent abuse by using the limiter; but let me know what you think, whether there is any caveat to this, or whether a cache is necessary only for the sake of good practice.

    Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
paulhauner committed Dec 9, 2022
1 parent ad5406a commit 08fe3f8
Showing 76 changed files with 1,738 additions and 2,035 deletions.
784 changes: 458 additions & 326 deletions beacon_node/beacon_chain/src/beacon_chain.rs

Large diffs are not rendered by default.

22 changes: 7 additions & 15 deletions beacon_node/beacon_chain/src/beacon_fork_choice_store.rs
@@ -61,7 +61,7 @@ pub fn get_effective_balances<T: EthSpec>(state: &BeaconState<T>) -> Vec<u64> {
}

#[superstruct(
variants(V1, V8),
variants(V8),
variant_attributes(derive(PartialEq, Clone, Debug, Encode, Decode)),
no_enum
)]
@@ -75,13 +75,11 @@ pub(crate) struct CacheItem {
pub(crate) type CacheItem = CacheItemV8;

#[superstruct(
variants(V1, V8),
variants(V8),
variant_attributes(derive(PartialEq, Clone, Default, Debug, Encode, Decode)),
no_enum
)]
pub struct BalancesCache {
#[superstruct(only(V1))]
pub(crate) items: Vec<CacheItemV1>,
#[superstruct(only(V8))]
pub(crate) items: Vec<CacheItemV8>,
}
@@ -366,26 +364,20 @@ where
}

/// A container which allows persisting the `BeaconForkChoiceStore` to the on-disk database.
#[superstruct(
variants(V1, V7, V8, V10, V11),
variant_attributes(derive(Encode, Decode)),
no_enum
)]
#[superstruct(variants(V11), variant_attributes(derive(Encode, Decode)), no_enum)]
pub struct PersistedForkChoiceStore {
#[superstruct(only(V1, V7))]
pub balances_cache: BalancesCacheV1,
#[superstruct(only(V8, V10, V11))]
#[superstruct(only(V11))]
pub balances_cache: BalancesCacheV8,
pub time: Slot,
pub finalized_checkpoint: Checkpoint,
pub justified_checkpoint: Checkpoint,
pub justified_balances: Vec<u64>,
pub best_justified_checkpoint: Checkpoint,
#[superstruct(only(V10, V11))]
#[superstruct(only(V11))]
pub unrealized_justified_checkpoint: Checkpoint,
#[superstruct(only(V10, V11))]
#[superstruct(only(V11))]
pub unrealized_finalized_checkpoint: Checkpoint,
#[superstruct(only(V7, V8, V10, V11))]
#[superstruct(only(V11))]
pub proposer_boost_root: Hash256,
#[superstruct(only(V11))]
pub equivocating_indices: BTreeSet<u64>,
259 changes: 156 additions & 103 deletions beacon_node/beacon_chain/src/block_verification.rs

Large diffs are not rendered by default.

3 changes: 0 additions & 3 deletions beacon_node/beacon_chain/src/chain_config.rs
@@ -47,8 +47,6 @@ pub struct ChainConfig {
pub count_unrealized_full: CountUnrealizedFull,
/// Optionally set timeout for calls to checkpoint sync endpoint.
pub checkpoint_sync_url_timeout: u64,
/// Whether to enable the light client server protocol.
pub enable_light_client_server: bool,
}

impl Default for ChainConfig {
@@ -70,7 +68,6 @@ impl Default for ChainConfig {
paranoid_block_proposal: false,
count_unrealized_full: CountUnrealizedFull::default(),
checkpoint_sync_url_timeout: 60,
enable_light_client_server: false,
}
}
}
48 changes: 33 additions & 15 deletions beacon_node/beacon_chain/src/execution_payload.rs
@@ -35,6 +35,16 @@ pub enum AllowOptimisticImport {
No,
}

/// Signal whether the execution payloads of new blocks should be
/// immediately verified with the EL or imported optimistically without
/// any EL communication.
#[derive(Default, Clone, Copy)]
pub enum NotifyExecutionLayer {
#[default]
Yes,
No,
}

/// Used to await the result of executing payload with a remote EE.
pub struct PayloadNotifier<T: BeaconChainTypes> {
pub chain: Arc<BeaconChain<T>>,
@@ -47,21 +57,28 @@ impl<T: BeaconChainTypes> PayloadNotifier<T> {
chain: Arc<BeaconChain<T>>,
block: Arc<SignedBeaconBlock<T::EthSpec>>,
state: &BeaconState<T::EthSpec>,
notify_execution_layer: NotifyExecutionLayer,
) -> Result<Self, BlockError<T::EthSpec>> {
let payload_verification_status = if is_execution_enabled(state, block.message().body()) {
// Perform the initial stages of payload verification.
//
// We will duplicate these checks again during `per_block_processing`, however these checks
// are cheap and doing them here ensures we protect the execution engine from junk.
partially_verify_execution_payload(
state,
block.message().execution_payload()?,
&chain.spec,
)
.map_err(BlockError::PerBlockProcessingError)?;
None
} else {
Some(PayloadVerificationStatus::Irrelevant)
let payload_verification_status = match notify_execution_layer {
NotifyExecutionLayer::No => Some(PayloadVerificationStatus::Optimistic),
NotifyExecutionLayer::Yes => {
if is_execution_enabled(state, block.message().body()) {
// Perform the initial stages of payload verification.
//
// We will duplicate these checks again during `per_block_processing`, however these checks
// are cheap and doing them here ensures we protect the execution engine from junk.
partially_verify_execution_payload(
state,
block.slot(),
block.message().execution_payload()?,
&chain.spec,
)
.map_err(BlockError::PerBlockProcessingError)?;
None
} else {
Some(PayloadVerificationStatus::Irrelevant)
}
}
};

Ok(Self {
@@ -357,7 +374,8 @@ pub fn get_execution_payload<
let spec = &chain.spec;
let current_epoch = state.current_epoch();
let is_merge_transition_complete = is_merge_transition_complete(state);
let timestamp = compute_timestamp_at_slot(state, spec).map_err(BeaconStateError::from)?;
let timestamp =
compute_timestamp_at_slot(state, state.slot(), spec).map_err(BeaconStateError::from)?;
let random = *state.get_randao_mix(current_epoch)?;
let latest_execution_payload_header_block_hash =
state.latest_execution_payload_header()?.block_hash;
1 change: 1 addition & 0 deletions beacon_node/beacon_chain/src/lib.rs
@@ -63,6 +63,7 @@ pub use canonical_head::{CachedHead, CanonicalHead, CanonicalHeadRwLock};
pub use eth1_chain::{Eth1Chain, Eth1ChainBackend};
pub use events::ServerSentEventHandler;
pub use execution_layer::EngineState;
pub use execution_payload::NotifyExecutionLayer;
pub use fork_choice::{ExecutionStatus, ForkchoiceUpdateParameters};
pub use metrics::scrape_for_metrics;
pub use parking_lot;
5 changes: 5 additions & 0 deletions beacon_node/beacon_chain/src/metrics.rs
@@ -64,6 +64,11 @@ lazy_static! {
"beacon_block_processing_state_root_seconds",
"Time spent calculating the state root when processing a block."
);
pub static ref BLOCK_PROCESSING_POST_EXEC_PROCESSING: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_block_processing_post_exec_pre_attestable_seconds",
"Time between finishing execution processing and the block becoming attestable",
linear_buckets(5e-3, 5e-3, 10)
);
pub static ref BLOCK_PROCESSING_DB_WRITE: Result<Histogram> = try_create_histogram(
"beacon_block_processing_db_write_seconds",
"Time spent writing a newly processed block and state to DB"
23 changes: 2 additions & 21 deletions beacon_node/beacon_chain/src/persisted_fork_choice.rs
@@ -1,7 +1,4 @@
use crate::beacon_fork_choice_store::{
PersistedForkChoiceStoreV1, PersistedForkChoiceStoreV10, PersistedForkChoiceStoreV11,
PersistedForkChoiceStoreV7, PersistedForkChoiceStoreV8,
};
use crate::beacon_fork_choice_store::PersistedForkChoiceStoreV11;
use ssz::{Decode, Encode};
use ssz_derive::{Decode, Encode};
use store::{DBColumn, Error, StoreItem};
@@ -10,21 +7,9 @@ use superstruct::superstruct;
// If adding a new version you should update this type alias and fix the breakages.
pub type PersistedForkChoice = PersistedForkChoiceV11;

#[superstruct(
variants(V1, V7, V8, V10, V11),
variant_attributes(derive(Encode, Decode)),
no_enum
)]
#[superstruct(variants(V11), variant_attributes(derive(Encode, Decode)), no_enum)]
pub struct PersistedForkChoice {
pub fork_choice: fork_choice::PersistedForkChoice,
#[superstruct(only(V1))]
pub fork_choice_store: PersistedForkChoiceStoreV1,
#[superstruct(only(V7))]
pub fork_choice_store: PersistedForkChoiceStoreV7,
#[superstruct(only(V8))]
pub fork_choice_store: PersistedForkChoiceStoreV8,
#[superstruct(only(V10))]
pub fork_choice_store: PersistedForkChoiceStoreV10,
#[superstruct(only(V11))]
pub fork_choice_store: PersistedForkChoiceStoreV11,
}
@@ -47,8 +32,4 @@ macro_rules! impl_store_item {
};
}

impl_store_item!(PersistedForkChoiceV1);
impl_store_item!(PersistedForkChoiceV7);
impl_store_item!(PersistedForkChoiceV8);
impl_store_item!(PersistedForkChoiceV10);
impl_store_item!(PersistedForkChoiceV11);
163 changes: 3 additions & 160 deletions beacon_node/beacon_chain/src/schema_change.rs
@@ -1,20 +1,9 @@
//! Utilities for managing database schema changes.
mod migration_schema_v10;
mod migration_schema_v11;
mod migration_schema_v12;
mod migration_schema_v13;
mod migration_schema_v6;
mod migration_schema_v7;
mod migration_schema_v8;
mod migration_schema_v9;
mod types;

use crate::beacon_chain::{BeaconChainTypes, ETH1_CACHE_DB_KEY, FORK_CHOICE_DB_KEY};
use crate::beacon_chain::{BeaconChainTypes, ETH1_CACHE_DB_KEY};
use crate::eth1_chain::SszEth1;
use crate::persisted_fork_choice::{
PersistedForkChoiceV1, PersistedForkChoiceV10, PersistedForkChoiceV11, PersistedForkChoiceV7,
PersistedForkChoiceV8,
};
use crate::types::ChainSpec;
use slog::{warn, Logger};
use std::sync::Arc;
@@ -23,6 +12,7 @@ use store::metadata::{SchemaVersion, CURRENT_SCHEMA_VERSION};
use store::{Error as StoreError, StoreItem};

/// Migrate the database from one schema version to another, applying all requisite mutations.
#[allow(clippy::only_used_in_recursion)] // spec is not used but likely to be used in future
pub fn migrate_schema<T: BeaconChainTypes>(
db: Arc<HotColdDB<T::EthSpec, T::HotStore, T::ColdStore>>,
deposit_contract_deploy_block: u64,
@@ -62,156 +52,9 @@ pub fn migrate_schema<T: BeaconChainTypes>(
}

//
// Migrations from before SchemaVersion(5) are deprecated.
// Migrations from before SchemaVersion(11) are deprecated.
//

// Migration for adding `execution_status` field to the fork choice store.
(SchemaVersion(5), SchemaVersion(6)) => {
// Database operations to be done atomically
let mut ops = vec![];

// The top-level `PersistedForkChoice` struct is still V1 but will have its internal
// bytes for the fork choice updated to V6.
let fork_choice_opt = db.get_item::<PersistedForkChoiceV1>(&FORK_CHOICE_DB_KEY)?;
if let Some(mut persisted_fork_choice) = fork_choice_opt {
migration_schema_v6::update_execution_statuses::<T>(&mut persisted_fork_choice)
.map_err(StoreError::SchemaMigrationError)?;

// Store the converted fork choice store under the same key.
ops.push(persisted_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY));
}

db.store_schema_version_atomically(to, ops)?;

Ok(())
}
// 1. Add `proposer_boost_root`.
// 2. Update `justified_epoch` to `justified_checkpoint` and `finalized_epoch` to
// `finalized_checkpoint`.
// 3. This migration also includes a potential update to the justified
// checkpoint in case the fork choice store's justified checkpoint and finalized checkpoint
// combination does not actually exist for any blocks in fork choice. This was possible in
// the consensus spec prior to v1.1.6.
//
// Relevant issues:
//
// https://github.com/sigp/lighthouse/issues/2741
// https://github.com/ethereum/consensus-specs/pull/2727
// https://github.com/ethereum/consensus-specs/pull/2730
(SchemaVersion(6), SchemaVersion(7)) => {
// Database operations to be done atomically
let mut ops = vec![];

let fork_choice_opt = db.get_item::<PersistedForkChoiceV1>(&FORK_CHOICE_DB_KEY)?;
if let Some(persisted_fork_choice_v1) = fork_choice_opt {
// This migrates the `PersistedForkChoiceStore`, adding the `proposer_boost_root` field.
let mut persisted_fork_choice_v7 = persisted_fork_choice_v1.into();

let result = migration_schema_v7::update_fork_choice::<T>(
&mut persisted_fork_choice_v7,
db.clone(),
);

// Fall back to re-initializing fork choice from an anchor state if necessary.
if let Err(e) = result {
warn!(log, "Unable to migrate to database schema 7, re-initializing fork choice"; "error" => ?e);
migration_schema_v7::update_with_reinitialized_fork_choice::<T>(
&mut persisted_fork_choice_v7,
db.clone(),
spec,
)
.map_err(StoreError::SchemaMigrationError)?;
}

// Store the converted fork choice store under the same key.
ops.push(persisted_fork_choice_v7.as_kv_store_op(FORK_CHOICE_DB_KEY));
}

db.store_schema_version_atomically(to, ops)?;

Ok(())
}
// Migration to add an `epoch` key to the fork choice's balances cache.
(SchemaVersion(7), SchemaVersion(8)) => {
let mut ops = vec![];
let fork_choice_opt = db.get_item::<PersistedForkChoiceV7>(&FORK_CHOICE_DB_KEY)?;
if let Some(fork_choice) = fork_choice_opt {
let updated_fork_choice =
migration_schema_v8::update_fork_choice::<T>(fork_choice, db.clone())?;

ops.push(updated_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY));
}

db.store_schema_version_atomically(to, ops)?;

Ok(())
}
// Upgrade from v8 to v9 to separate the execution payloads into their own column.
(SchemaVersion(8), SchemaVersion(9)) => {
migration_schema_v9::upgrade_to_v9::<T>(db.clone(), log)?;
db.store_schema_version(to)
}
// Downgrade from v9 to v8 to ignore the separation of execution payloads
// NOTE: only works before the Bellatrix fork epoch.
(SchemaVersion(9), SchemaVersion(8)) => {
migration_schema_v9::downgrade_from_v9::<T>(db.clone(), log)?;
db.store_schema_version(to)
}
(SchemaVersion(9), SchemaVersion(10)) => {
let mut ops = vec![];
let fork_choice_opt = db.get_item::<PersistedForkChoiceV8>(&FORK_CHOICE_DB_KEY)?;
if let Some(fork_choice) = fork_choice_opt {
let updated_fork_choice = migration_schema_v10::update_fork_choice(fork_choice)?;

ops.push(updated_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY));
}

db.store_schema_version_atomically(to, ops)?;

Ok(())
}
(SchemaVersion(10), SchemaVersion(9)) => {
let mut ops = vec![];
let fork_choice_opt = db.get_item::<PersistedForkChoiceV10>(&FORK_CHOICE_DB_KEY)?;
if let Some(fork_choice) = fork_choice_opt {
let updated_fork_choice = migration_schema_v10::downgrade_fork_choice(fork_choice)?;

ops.push(updated_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY));
}

db.store_schema_version_atomically(to, ops)?;

Ok(())
}
// Upgrade from v10 to v11 adding support for equivocating indices to fork choice.
(SchemaVersion(10), SchemaVersion(11)) => {
let mut ops = vec![];
let fork_choice_opt = db.get_item::<PersistedForkChoiceV10>(&FORK_CHOICE_DB_KEY)?;
if let Some(fork_choice) = fork_choice_opt {
let updated_fork_choice = migration_schema_v11::update_fork_choice(fork_choice);

ops.push(updated_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY));
}

db.store_schema_version_atomically(to, ops)?;

Ok(())
}
// Downgrade from v11 to v10 removing support for equivocating indices from fork choice.
(SchemaVersion(11), SchemaVersion(10)) => {
let mut ops = vec![];
let fork_choice_opt = db.get_item::<PersistedForkChoiceV11>(&FORK_CHOICE_DB_KEY)?;
if let Some(fork_choice) = fork_choice_opt {
let updated_fork_choice =
migration_schema_v11::downgrade_fork_choice(fork_choice, log);

ops.push(updated_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY));
}

db.store_schema_version_atomically(to, ops)?;

Ok(())
}
// Upgrade from v11 to v12 to store richer metadata in the attestation op pool.
(SchemaVersion(11), SchemaVersion(12)) => {
let ops = migration_schema_v12::upgrade_to_v12::<T>(db.clone(), log)?;