Merged
35 commits
f469db1
Initial eth_config implementation
SDartayet Sep 17, 2025
bb6448b
Fixes to eth_config implementation
SDartayet Sep 18, 2025
d052ea3
Fixed fork hash
SDartayet Sep 18, 2025
96bbd66
Refactored precompiles
SDartayet Sep 19, 2025
b62647d
Added next and last
SDartayet Sep 23, 2025
3c10e6e
Fix block timestamp
SDartayet Sep 23, 2025
d7c79b7
Fix to getting latest block timestamp
SDartayet Sep 23, 2025
4d3d6fc
Fixed issues with forkid and getting next fork
SDartayet Sep 23, 2025
b4f0ca7
Fixed issue with previous commit
SDartayet Sep 23, 2025
72c3c5d
Fixed bug with getting fork id
SDartayet Sep 23, 2025
fdf462b
Fixing lint and compilation issues
SDartayet Sep 23, 2025
31596cc
Fixing lint issues
SDartayet Sep 23, 2025
417aabe
Renamed file and removed unused function
SDartayet Sep 23, 2025
dd8a9f5
Renamed file and removed unused function
SDartayet Sep 23, 2025
39143cd
Merge branch 'main' into eth-config-method
SDartayet Sep 23, 2025
1ec3fb0
Fixed failing test and added test for method
SDartayet Sep 24, 2025
3ac2b7c
Merge branch 'main' into eth-config-method
SDartayet Sep 24, 2025
44fff58
Fixing bug with test
SDartayet Sep 24, 2025
ed07866
Merge branch 'main' into eth-config-method
SDartayet Sep 25, 2025
52915dd
Fixing issues with new method
SDartayet Sep 25, 2025
debcf12
Moved precompiles and system contracts back to VM crate
SDartayet Sep 26, 2025
c986380
Fixing error in previous commit
SDartayet Sep 26, 2025
7fe8189
Merge branch 'main' into eth-config-method
SDartayet Sep 26, 2025
cda9ff1
Fixing address casing
SDartayet Sep 26, 2025
40995d7
Corrected wrong system contract fork
SDartayet Sep 26, 2025
f00a1b5
Fixed error with system contracts
SDartayet Sep 26, 2025
65bf006
Merge branch 'main' into eth-config-method
SDartayet Sep 26, 2025
0d0d734
Fixing review comments and merge issue
SDartayet Sep 26, 2025
0e3856d
Updated eth config test
SDartayet Sep 26, 2025
3d38f78
Added missing precompile in test
SDartayet Sep 26, 2025
e7857f3
Fixing tests
SDartayet Sep 26, 2025
25541ed
Update rpc.rs
SDartayet Sep 29, 2025
677920a
Changed unwrap or default for returning error
SDartayet Oct 1, 2025
8fd1abe
Merge branch 'main' into eth-config-method
SDartayet Oct 1, 2025
b6d3b4f
Merge branch 'main' into eth-config-method
SDartayet Oct 1, 2025
95 changes: 93 additions & 2 deletions crates/common/types/genesis.rs
@@ -111,9 +111,9 @@ impl TryFrom<&Path> for Genesis {
)]
#[serde(rename_all = "camelCase")]
pub struct ForkBlobSchedule {
-    pub target: u32,
-    pub max: u32,
     pub base_fee_update_fraction: u64,
+    pub max: u32,
+    pub target: u32,
Comment on lines -114 to +116

Contributor Author:
I reordered the fields so they get serialized in alphabetical order, since the spec for the method requires them to be.

}
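For context on the reorder: serde_json emits struct fields in declaration order, so declaring them alphabetically is what makes the serialized keys come out alphabetical. A trivial stdlib-only check of the invariant on the camelCase names (the list of names is taken from the struct above):

```rust
fn main() {
    // camelCase field names of ForkBlobSchedule in the new declaration order.
    let declared = ["baseFeeUpdateFraction", "max", "target"];
    let mut sorted = declared;
    sorted.sort();
    // Declaration order is already alphabetical, so serde_json will emit
    // the keys in alphabetical order with no extra sorting step.
    assert_eq!(declared, sorted);
}
```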

#[allow(unused)]
@@ -460,6 +460,97 @@ impl ChainConfig {
self.get_fork(block_timestamp)
}

pub fn next_fork(&self, block_timestamp: u64) -> Option<Fork> {
let next = if self.is_bpo5_activated(block_timestamp) {
None
} else if self.is_bpo4_activated(block_timestamp) && self.bpo5_time.is_some() {
Collaborator:
It seems we need to implement ordering of forks, then this would be simpler

Contributor Author:
I don't think ordering of forks alone would suffice, unless there's an approach I'm missing. One, ordering isn't enough to ask a fork what the fork after it is; we'd still need to implement From for Fork, keep an array of all the Fork enum variants indexed by fork number, or use a crate like strum. Two, we'd also need to know whether a fork is activated, which could only be done if the ChainConfig struct stored the fork timestamps in an array indexed by that same fork number.
I still think the necessary refactors would be worth it, for future maintainability too; I decided not to do them here to keep this PR from growing too big, since they seemed out of scope.

Collaborator:
Ok, please create a follow-up ticket. Not planning to work on this right now, but it's good to have one.

Some(Fork::BPO5)
} else if self.is_bpo3_activated(block_timestamp) && self.bpo4_time.is_some() {
Some(Fork::BPO4)
} else if self.is_bpo2_activated(block_timestamp) && self.bpo3_time.is_some() {
Some(Fork::BPO3)
} else if self.is_bpo1_activated(block_timestamp) && self.bpo2_time.is_some() {
Some(Fork::BPO2)
} else if self.is_osaka_activated(block_timestamp) && self.bpo1_time.is_some() {
Some(Fork::BPO1)
} else if self.is_prague_activated(block_timestamp) && self.osaka_time.is_some() {
Collaborator:
Is it possible for BPO3, for example, to be configured and BPO1 and BPO2 to not be? I believe this won't work in that case, so we should check that out, or add a sanity check and warning for those cases.

Contributor Author:
As far as I can tell, that shouldn't happen. On mainnet, the EIP specification implies that if BPO N is activated then BPO N-1 is as well, and it wouldn't make much sense for testnets not to follow the same convention. IMHO it's worth revisiting if such a situation comes up in the future, but I don't see why it would.

Some(Fork::Osaka)
} else if self.is_cancun_activated(block_timestamp) && self.prague_time.is_some() {
Some(Fork::Prague)
} else if self.is_shanghai_activated(block_timestamp) && self.cancun_time.is_some() {
Some(Fork::Cancun)
} else {
None
};
match next {
Some(fork) if fork > self.fork(block_timestamp) => next,
_ => None,
}
}
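The iteration-based refactor discussed in the thread above could keep the scheduled forks in one ascending array, turning `next_fork` and `get_last_scheduled_fork` into simple scans instead of if/else chains. A minimal stdlib-only sketch — the array-based `ChainConfig`, the trimmed `Fork` enum, and `check_no_gaps` are hypothetical illustrations, not the crate's actual types; `check_no_gaps` also covers the "BPO3 configured without BPO1/BPO2" sanity check raised below:

```rust
// Scheduled forks held in ascending fork order, each paired with its
// optional activation timestamp.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Fork { Cancun, Prague, Osaka, BPO1 } // real enum has more variants

struct ChainConfig {
    schedule: Vec<(Fork, Option<u64>)>, // ascending by fork
}

impl ChainConfig {
    // Latest fork whose timestamp is set and has passed.
    fn current_fork(&self, ts: u64) -> Option<Fork> {
        self.schedule.iter()
            .filter(|(_, t)| t.is_some_and(|t| t <= ts))
            .map(|(f, _)| *f)
            .last()
    }

    // First scheduled fork strictly after the given timestamp.
    fn next_fork(&self, ts: u64) -> Option<Fork> {
        self.schedule.iter()
            .find(|(_, t)| t.is_some_and(|t| t > ts))
            .map(|(f, _)| *f)
    }

    // Highest fork with any activation timestamp configured.
    fn last_scheduled_fork(&self) -> Option<Fork> {
        self.schedule.iter().rev()
            .find(|(_, t)| t.is_some())
            .map(|(f, _)| *f)
    }

    // Sanity check from the review thread: a scheduled fork implies all
    // earlier forks are scheduled too (no BPO3 without BPO1/BPO2).
    fn check_no_gaps(&self) -> bool {
        self.schedule.windows(2)
            .all(|w| !(w[0].1.is_none() && w[1].1.is_some()))
    }
}

fn main() {
    let cfg = ChainConfig { schedule: vec![
        (Fork::Cancun, Some(100)),
        (Fork::Prague, Some(200)),
        (Fork::Osaka, Some(300)),
        (Fork::BPO1, None),
    ]};
    assert_eq!(cfg.current_fork(250), Some(Fork::Prague));
    assert_eq!(cfg.next_fork(250), Some(Fork::Osaka));
    assert_eq!(cfg.last_scheduled_fork(), Some(Fork::Osaka));
    assert!(cfg.check_no_gaps());
}
```

The real `fork()` falls back to `Fork::Paris` rather than returning an `Option`, but the scan structure carries over.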

pub fn get_last_scheduled_fork(&self) -> Fork {
if self.bpo5_time.is_some() {
Fork::BPO5
} else if self.bpo4_time.is_some() {
Fork::BPO4
} else if self.bpo3_time.is_some() {
Fork::BPO3
} else if self.bpo2_time.is_some() {
Fork::BPO2
} else if self.bpo1_time.is_some() {
Fork::BPO1
} else if self.osaka_time.is_some() {
Fork::Osaka
} else if self.prague_time.is_some() {
Fork::Prague
} else if self.cancun_time.is_some() {
Fork::Cancun
} else {
Fork::Paris
}
}

pub fn get_activation_timestamp_for_fork(&self, fork: Fork) -> Option<u64> {
match fork {
Fork::Cancun => self.cancun_time,
Fork::Prague => self.prague_time,
Fork::Osaka => self.osaka_time,
Fork::BPO1 => self.bpo1_time,
Fork::BPO2 => self.bpo2_time,
Fork::BPO3 => self.bpo3_time,
Fork::BPO4 => self.bpo4_time,
Fork::BPO5 => self.bpo5_time,
Fork::Homestead => self.homestead_block,
Fork::DaoFork => self.dao_fork_block,
Fork::Byzantium => self.byzantium_block,
Fork::Constantinople => self.constantinople_block,
Fork::Petersburg => self.petersburg_block,
Fork::Istanbul => self.istanbul_block,
Fork::MuirGlacier => self.muir_glacier_block,
Fork::Berlin => self.berlin_block,
Fork::London => self.london_block,
Fork::ArrowGlacier => self.arrow_glacier_block,
Fork::GrayGlacier => self.gray_glacier_block,
Fork::Paris => self.merge_netsplit_block,
Fork::Shanghai => self.shanghai_time,
_ => None,
}
}

pub fn get_blob_schedule_for_fork(&self, fork: Fork) -> Option<ForkBlobSchedule> {
match fork {
Fork::Cancun => Some(self.blob_schedule.cancun),
Fork::Prague => Some(self.blob_schedule.prague),
Fork::Osaka => Some(self.blob_schedule.osaka),
Fork::BPO1 => Some(self.blob_schedule.bpo1),
Fork::BPO2 => Some(self.blob_schedule.bpo2),
Fork::BPO3 => self.blob_schedule.bpo3,
Fork::BPO4 => self.blob_schedule.bpo4,
Fork::BPO5 => self.blob_schedule.bpo5,
_ => None,
}
}

pub fn gather_forks(&self, genesis_header: BlockHeader) -> (Vec<u64>, Vec<u64>) {
let mut block_number_based_forks: Vec<u64> = vec![
self.homestead_block,
…
1 change: 1 addition & 0 deletions crates/networking/rpc/engine/blobs.rs
@@ -44,6 +44,7 @@ impl RpcHandler for BlobsV1Request {

async fn handle(&self, context: RpcApiContext) -> Result<Value, RpcErr> {
info!("Received new engine request: Requested Blobs");

if self.blob_versioned_hashes.len() >= GET_BLOBS_V1_REQUEST_MAX_SIZE {
return Err(RpcErr::TooLargeRequest);
}
…
115 changes: 115 additions & 0 deletions crates/networking/rpc/eth/client.rs
@@ -1,4 +1,12 @@
use std::collections::BTreeMap;

use ethrex_common::H32;
use ethrex_common::H160;
use ethrex_common::serde_utils;
use ethrex_common::types::Fork;
use ethrex_common::types::ForkBlobSchedule;
use ethrex_common::types::ForkId;
use ethrex_vm::{precompiles_for_fork, system_contracts::system_contracts_for_fork};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use tracing::debug;
@@ -62,3 +70,110 @@ impl RpcHandler for Syncing {
}
}
}

pub struct Config;

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct EthConfigObject {
activation_time: Option<u64>,
blob_schedule: Option<ForkBlobSchedule>,
#[serde(with = "serde_utils::u64::hex_str")]
chain_id: u64,
fork_id: H32,
precompiles: BTreeMap<String, H160>,
system_contracts: BTreeMap<String, H160>,
}

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct EthConfigResponse {
current: EthConfigObject,
next: Option<EthConfigObject>,
last: Option<EthConfigObject>,
}

impl RpcHandler for Config {
fn parse(_params: &Option<Vec<Value>>) -> Result<Self, RpcErr> {
Ok(Self {})
}

async fn handle(&self, context: RpcApiContext) -> Result<Value, RpcErr> {
let chain_config = context.storage.get_chain_config()?;
let Some(latest_block) = context
.storage
.get_block_by_number(context.storage.get_latest_block_number().await?)
.await?
else {
return Err(RpcErr::Internal("Failed to fetch latest block".to_string()));
};

let latest_block_timestamp = latest_block.header.timestamp;
let current_fork = chain_config.get_fork(latest_block_timestamp);

if current_fork < Fork::Paris {
return Err(RpcErr::UnsuportedFork(
"eth-config is not supported for forks prior to Paris".to_string(),
));
}

let current = get_config_for_fork(current_fork, &context).await?;
let next = if let Some(next_fork) = chain_config.next_fork(latest_block_timestamp) {
Some(get_config_for_fork(next_fork, &context).await?)
} else {
None
};
let last_fork = chain_config.get_last_scheduled_fork();
let last = if last_fork > current_fork {
Some(get_config_for_fork(last_fork, &context).await?)
} else {
None
};
let response = EthConfigResponse {
current,
next,
last,
};

serde_json::to_value(response).map_err(|error| RpcErr::Internal(error.to_string()))
}
}
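Put together, a response from this handler would be shaped roughly like the following — all values are hypothetical placeholders; the key names and casing follow the `camelCase` renames on the structs above, with `chainId` hex-encoded and the precompile/system-contract maps keyed by name:

```json
{
  "current": {
    "activationTime": 1710338135,
    "blobSchedule": {
      "baseFeeUpdateFraction": 3338477,
      "max": 6,
      "target": 3
    },
    "chainId": "0x1",
    "forkId": "0x9f3d2254",
    "precompiles": {
      "ECREC": "0x0000000000000000000000000000000000000001"
    },
    "systemContracts": {
      "BEACON_ROOTS_ADDRESS": "0x000f3df6d732807ef1319fb7b8bb8522d0beac02"
    }
  },
  "next": null,
  "last": null
}
```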

async fn get_config_for_fork(
fork: Fork,
context: &RpcApiContext,
) -> Result<EthConfigObject, RpcErr> {
let chain_config = context.storage.get_chain_config()?;
let activation_time = chain_config.get_activation_timestamp_for_fork(fork);
let genesis_header = context
.storage
.get_block_by_number(0)
.await?
.expect("Failed to get genesis block. This should not happen.")
.header;
let block_number = context.storage.get_latest_block_number().await?;
let fork_id = if let Some(timestamp) = activation_time {
ForkId::new(chain_config, genesis_header, timestamp, block_number).fork_hash
Comment on lines +154 to +156

Collaborator:
Does passing the latest block number here work? I thought the timestamp and block number need to be for the block which we want the fork ID from.

Contributor Author:
To be fair, I only meant for this to work properly with post-merge forks, since the EIP only requires the method to work from Cancun onwards. I should probably add a proper check and return an error for pre-Merge forks, though (the EIP doesn't require the method to work for Paris and Shanghai either, but it's simple enough to return something sensible for them).

Collaborator (@MegaRedHand, Sep 26, 2025):

Returning an error for pre-Paris forks is OK. We should also add a comment here, mentioning that the block number is not strictly correct but is enough for post-merge forks.

} else {
H32::zero()
};
let mut system_contracts = BTreeMap::new();
for contract in system_contracts_for_fork(fork) {
system_contracts.insert(contract.name.to_string(), contract.address);
}

let mut precompiles = BTreeMap::new();

for precompile in precompiles_for_fork(fork) {
precompiles.insert(precompile.name.to_string(), precompile.address);
}

Ok(EthConfigObject {
activation_time,
blob_schedule: chain_config.get_blob_schedule_for_fork(fork),
chain_id: chain_config.chain_id,
fork_id,
precompiles,
system_contracts,
})
}
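One detail worth noting in `get_config_for_fork`: collecting into a `BTreeMap` (rather than a `HashMap`) is what guarantees the precompile and system-contract objects serialize with alphabetically sorted keys, since `BTreeMap` iterates in key order. A small stdlib sketch — the precompile names and addresses here are placeholders:

```rust
use std::collections::BTreeMap;

fn main() {
    // BTreeMap iterates in sorted key order, so serializing it yields a
    // JSON object with alphabetically ordered keys -- no extra sort step.
    let mut precompiles: BTreeMap<&str, &str> = BTreeMap::new();
    precompiles.insert("SHA2-256", "0x0000000000000000000000000000000000000002");
    precompiles.insert("ECREC", "0x0000000000000000000000000000000000000001");
    precompiles.insert("RIPEMD-160", "0x0000000000000000000000000000000000000003");
    let names: Vec<&str> = precompiles.keys().copied().collect();
    assert_eq!(names, ["ECREC", "RIPEMD-160", "SHA2-256"]);
}
```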