Fetch blobs from EL prior to block verification #6600
base: unstable
Conversation
Any security considerations on triggering this logic before validating the block? The most damage a proposer can do is waste bandwidth on a bad proposal. This does not seem like a big issue and can be done anyway regardless of fetch_blobs. Otherwise the experimental results look great.

We're doing this after gossip validation of the block, so we know that the proposer's signature is valid and that they are a legitimate proposer for the slot. Unless the proposer slashes themselves, the blob versioned hashes in the block header are the "true" (valid) versioned hashes for this slot. Alternatively, the block could be completely invalid (but not slashable), in which case we will reject it upon completion of block processing. The fetched blobs are also checked as part of lighthouse/beacon_node/beacon_chain/src/fetch_blobs.rs (lines 131 to 148 in 6e1945f), so if they are malformed (e.g. a bad KZG proof), they will be rejected at that point. TL;DR: on the whole I think it's security-equivalent to processing blobs on gossip.
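The argument above rests on the fetched blobs being cross-checked against the versioned hashes already committed to in the gossip-verified block. A minimal std-only sketch of that check follows; the types and the hash function are illustrative stand-ins, not Lighthouse's actual API (a real implementation derives versioned hashes via SHA-256 of the KZG commitment with a version byte, per EIP-4844).

```rust
/// Stand-in for the versioned-hash derivation (illustrative only; the real
/// scheme is sha256(commitment) with the first byte replaced by 0x01).
fn versioned_hash(commitment: &[u8]) -> u64 {
    commitment
        .iter()
        .fold(0u64, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

/// Hypothetical representation of a blob fetched from the EL.
struct FetchedBlob {
    commitment: Vec<u8>,
}

/// Keep only blobs whose commitments match the block's versioned hashes.
/// A malformed blob (bad commitment/proof) is simply dropped here; an invalid
/// block itself is still rejected later during full block processing.
fn filter_blobs(block_hashes: &[u64], fetched: Vec<FetchedBlob>) -> Vec<FetchedBlob> {
    fetched
        .into_iter()
        .filter(|b| block_hashes.contains(&versioned_hash(&b.commitment)))
        .collect()
}

fn main() {
    let good = FetchedBlob { commitment: vec![1, 2, 3] };
    let bad = FetchedBlob { commitment: vec![9, 9, 9] };
    // The block (already gossip-verified) commits only to the "good" blob.
    let hashes = vec![versioned_hash(&good.commitment)];
    let kept = filter_blobs(&hashes, vec![good, bad]);
    println!("kept {} of 2 fetched blobs", kept.len());
}
```

The key point is that the versioned hashes come from the signed block, so a proposer cannot trick us into importing blobs they did not commit to; at worst we waste the fetch.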
(Resolved review thread on beacon_node/network/src/network_beacon_processor/gossip_methods.rs, now outdated.)
```rust
self.executor.spawn(
    async move {
        self_clone
            .fetch_engine_blobs_and_publish(block_clone, block_root, publish_blobs)
```
Since this is running as a task, it's no longer bound by the beacon processor queue. Could someone spam gossip blocks and cause a lot of fetch blobs work?
They would need to be beacon blocks with valid signatures, and this is a linear factor, so it can't really blow up much beyond the number of threads allocated to the beacon processor. E.g. if we have 16 threads in the BP, we might end up with 32 running tasks max, which are mostly I/O bound and should be handled just fine by Tokio.
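The bound argued for above (worker count caps spawned tasks, so load grows linearly, not unboundedly) can be made explicit with a counting semaphore. This is a std-only illustration of the principle; Lighthouse relies on the beacon processor's worker count rather than an explicit limiter like this one.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

/// Minimal counting semaphore capping in-flight background tasks.
struct Semaphore {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Self { permits: Mutex::new(n), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cv.wait(p).unwrap();
        }
        *p -= 1;
    }
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    // Allow at most 2 concurrent "fetch blobs" tasks, even with 8 requests.
    let sem = Arc::new(Semaphore::new(2));
    // Track (current, peak) concurrency to demonstrate the bound holds.
    let peak = Arc::new(Mutex::new((0usize, 0usize)));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let sem = Arc::clone(&sem);
            let peak = Arc::clone(&peak);
            thread::spawn(move || {
                sem.acquire();
                {
                    let mut p = peak.lock().unwrap();
                    p.0 += 1;
                    p.1 = p.1.max(p.0);
                }
                thread::sleep(Duration::from_millis(10)); // simulated I/O-bound work
                peak.lock().unwrap().0 -= 1;
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("peak concurrency: {}", peak.lock().unwrap().1);
}
```

Because a spammer must first produce gossip-valid signed blocks, and each such block adds at most one task, the same linear bound applies without an explicit semaphore.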
We do this in a few other places, like when we check the payload with the EL:
lighthouse/beacon_node/beacon_chain/src/block_verification.rs (lines 1407 to 1416 in 6329042):

```rust
// Spawn the payload verification future as a new task, but don't wait for it to complete.
// The `payload_verification_future` will be awaited later to ensure verification completed
// successfully.
let payload_verification_handle = chain
    .task_executor
    .spawn_handle(
        payload_verification_future,
        "execution_payload_verification",
    )
    .ok_or(BeaconChainError::RuntimeShutdown)?;
```
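The snippet above is the "spawn now, await later" pattern: verification starts immediately in the background while the caller continues other work, and the handle is only joined when the result is actually needed. A self-contained sketch of the same pattern using std threads in place of Lighthouse's task_executor (names are illustrative):

```rust
use std::thread;

fn main() {
    // Spawn the (simulated) payload verification as a background task,
    // but don't block on it yet.
    let payload_verification_handle = thread::spawn(|| -> Result<(), String> {
        // ... expensive verification against the EL would happen here ...
        Ok(())
    });

    // Meanwhile, continue with other verification work on the calling thread.
    let consensus_ok = true;

    // Only now block on the background result to decide the block's fate.
    let payload_ok = payload_verification_handle
        .join()
        .expect("verification task panicked")
        .is_ok();

    println!("fully verified: {}", consensus_ok && payload_ok);
}
```

The fetch-blobs spawn in this PR follows the same shape, except the task's result is delivered by publishing/importing blobs rather than by joining a handle.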
Squashed commit of the following:

commit 5f563ef
Author: Michael Sproul <michael@sigmaprime.io>
Date: Fri Nov 22 12:33:10 2024 +1100

    Run fetch blobs in parallel with block import

commit 3cfe9df
Author: Michael Sproul <michael@sigmaprime.io>
Date: Thu Nov 21 10:46:34 2024 +1100

    Fetch blobs from EL prior to block verification
Proposed Changes

Optimise `fetch_blobs` significantly, by fetching blobs from the EL prior to consensus and execution verification of the block.

We had noticed that we weren't getting many hits with fetch blobs, and this was because blobs were almost always arriving on gossip prior to us requesting them. Only a few times an hour would the `fetch_blobs` logic actually fire.

With this change I'm seeing much more frequent hits, without a substantial increase in publication bandwidth. In the last 30 minutes running on mainnet there have been 116 hits, and 156 individual blobs published (out of 395 fetched).
Data here: https://docs.google.com/spreadsheets/d/1ZJIYbOPwNGa_veqUC0ywsOdzFYvh4aqJLMYqFoisA_E/edit?usp=sharing
This does imply that we're publishing around 35% of all blobs! But this will likely come down as more nodes chip in to publishing.