HIP-415: Introduction of Blocks #434
Daniel-K-Ivanov
Introduction of Blocks
hip: 415
title: Introduction of Blocks
author: Daniel Ivanov daniel-k-ivanov95@gmail.tech, Ivan Kavaldzhiev ivan.kavaldzhiev@limechain.tech
working-group: Danno Ferrin <@shemnon>, Richard Bair <@rbair23>, Steven Sheehy steven.sheehy@hedera.com, Mitchell Martin mitch@swirlds.com
type: Standards Track
category: Core
needs-council-approval: Yes
status: Draft
created: 2022-04-03
discussions-to: //TODO
updated: 2022-04-13
Abstract
Specifies how to introduce and formalize the concept of Blocks in Hedera Hashgraph so that it can serve as a foundation on which further interoperability with existing DLT networks and infrastructure can be built.
Motivation
The concept of blocks is a vital part of the existing Ethereum infrastructure and, as such, the introduction of blocks in Hedera can be considered a foundational step towards greater interoperability with EVM-based tooling, explorers, exchanges and wallet providers.
Rationale
Hedera must have a single, consistent answer to what a block number is, what its hash is, and which transactions are included in that block. Among other reasons, this is required so that the EVM can resolve the number and the hash of the current block while running EVM bytecode.

Design Goal #1: Minimize Changes
The Block concept should fit naturally into the existing processes, mechanisms and state updates. It must keep the same responsibilities between consensus, services and mirror nodes.
Design Goal #2: Lightweight
The Block concept must add minimal complexity and performance overhead to the processing of transactions, and must have minimal impact on the TPS of the network.

Based on the design goals above, this specification defines block properties that are computed and populated at different points in time and by different components of the network, according to their responsibilities.
Specification
Definitions

block → Record file containing all Record Stream Objects for a given time frame. Block times are to be at least hedera.recordStream.logPeriod seconds. The genesis block (blockNumber=0) is considered to start at the stream start date, with the first RCD file exported from services nodes.

block number → consecutive number of the Record file, incremented by 1 for every new Record file. For already existing networks, this value will initially be bootstrapped through Mirror Nodes and afterwards maintained by services nodes.

block hash → 32-byte prefix (out of 48 bytes) of the running hash of the last Record Stream Object from the previous Record File.

block timestamp → consensus timestamp (Instant) of the first transaction/Record Stream Object in the Record file.

Platform
Adapt TimestampStreamFileWriter to include blockNumber in the Record File. Introduce a new field firstConsensusTimeInCurrentFile to be used as a marker for when to start a new Record file, and use the field lastConsensusTimestamp to keep track of the last-seen consensus timestamp that was processed. In this way, we can ensure that there is at least a 1000ns difference between the last processed transaction and the start of a new file. The unit of time for those two properties is nanos. This guarantees that a parent transaction and its child precompile transactions all land in the same block/Record file. Otherwise, we might hit a corner case where a parent transaction is included in one block while its child precompile transaction falls into the next block, since the child increases the consensus timestamp by 1ns. Therefore, the algorithm for checking whether to start a new file is the following:

A new Record Stream Object enters addObject(T object) in TimestampStreamFileWriter with consensus timestamp T. We create a new file only if both conditions (1) and (2) are met, or if lastConsensusTime or firstConsensusTimeInCurrentFile is null:
1. T - lastConsensusTime > 1000ns
2. T - firstConsensusTimeInCurrentFile > 2s
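The split check above can be sketched as follows. This is a minimal illustration under the stated conditions, not the actual services code; the class and method names (RecordFileSplitter, shouldStartNewFile, track) are assumptions for the sketch.

```java
import java.time.Duration;
import java.time.Instant;

/** Sketch of the record-file split decision described above. */
public class RecordFileSplitter {
    // Illustrative stand-ins for lastConsensusTimestamp and
    // firstConsensusTimeInCurrentFile from TimestampStreamFileWriter.
    private Instant lastConsensusTime = null;
    private Instant firstConsensusTimeInCurrentFile = null;

    private static final long MIN_GAP_NANOS = 1_000;               // 1000ns between transactions
    private static final long LOG_PERIOD_NANOS = 2_000_000_000L;   // hedera.recordStream.logPeriod = 2s

    /** True when a new record file (block) must be started for an object at consensus time t. */
    public boolean shouldStartNewFile(Instant t) {
        if (lastConsensusTime == null || firstConsensusTimeInCurrentFile == null) {
            return true; // first object ever seen starts a file
        }
        long sinceLast = Duration.between(lastConsensusTime, t).toNanos();
        long sinceFirst = Duration.between(firstConsensusTimeInCurrentFile, t).toNanos();
        // both (1) and (2) must hold
        return sinceLast > MIN_GAP_NANOS && sinceFirst > LOG_PERIOD_NANOS;
    }

    /** Record that the object at t was processed, opening a new file if needed. */
    public void track(Instant t) {
        if (shouldStartNewFile(t)) {
            firstConsensusTimeInCurrentFile = t;
        }
        lastConsensusTime = t;
    }
}
```

Note how a child precompile transaction 1ns after its parent never satisfies condition (1), so the pair always stays in one file.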
Services
Services is to update the transaction processing logic so that it supports determining new record file periods, incrementing the block number, and keeping block-relevant data. The proposed solution specifies a long field to be used for the block number counter, incremented every hedera.recordStream.logPeriod seconds. Using a signed 32-bit int would result in the block number rolling over in about 140 years (if the current 2-second block length is kept), and sub-second block lengths would exhaust that range well within the operational lifetime of typical networks; a signed 64-bit integer provides a much longer timeframe.
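The counter-width reasoning can be verified with quick back-of-the-envelope arithmetic (illustrative only):

```java
/** Back-of-the-envelope check of the block counter width discussed above. */
public class CounterWidth {
    /** Years until a signed 32-bit block counter rolls over at the given block length. */
    public static long yearsUntilInt32Rollover(long blockLengthSeconds) {
        long secondsPerYear = 365L * 24 * 3600;
        // Integer.MAX_VALUE blocks, each blockLengthSeconds long
        return (long) Integer.MAX_VALUE * blockLengthSeconds / secondsPerYear;
    }
}
```

At the current 2-second period this gives roughly 136 years, matching the "about 140 years" estimate; a 64-bit counter pushes rollover far beyond any practical horizon.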
Pseudo-code of the record streaming algorithm

The block number, timestamp and hash (a map of the 256 most recent blocks) must be available during transaction execution, since the following opcodes are to be supported as per the EVM specification:

BLOCKHASH → accepts the block NUMBER for which to return the hash; the valid range is the last 256 blocks, not including the current one.
NUMBER → returns the current block number.
TIMESTAMP → returns the unix timestamp of the current block.
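The 256-block window these opcodes require could be kept as a small bounded structure. A sketch, with hash values stubbed as longs and all names hypothetical, not the services implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Illustrative 256-entry window of recent block hashes for BLOCKHASH/NUMBER. */
public class RecentBlocks {
    private static final int WINDOW = 256;
    // each entry: {blockNumber, hashStub}; newest first
    private final Deque<long[]> recent = new ArrayDeque<>();
    private long currentNumber = 0;

    /** Called when a record file closes: records its hash and advances the block number. */
    public void seal(long hashStub) {
        recent.addFirst(new long[] { currentNumber, hashStub });
        if (recent.size() > WINDOW) {
            recent.removeLast(); // forget blocks older than the 256-block window
        }
        currentNumber++;
    }

    /** BLOCKHASH semantics: valid only for the last 256 blocks, excluding the current one. */
    public long blockHash(long number) {
        if (number >= currentNumber || number < currentNumber - WINDOW) {
            return 0; // out of range -> zero, per the EVM specification
        }
        for (long[] e : recent) {
            if (e[0] == number) return e[1];
        }
        return 0;
    }

    /** NUMBER semantics: the current block number. */
    public long number() { return currentNumber; }
}
```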
Record File

Record file version is to be updated to 6. A new long property, blockNumber, is to be encoded after the End Object Running Hash. It is required for services to propagate this property to mirror nodes, since there are partial mirror nodes that do not keep the full history from the first record file and are therefore unable to calculate the block number, and thus all other block properties. Block hash and timestamp will not be included in the record files, since the block hash is keccak256(End Object Running Hash) and the timestamp is the first Record Stream Object's consensus timestamp.

With the introduction of the new version of Record Stream Objects, the respective libraries for state proofs must be updated:
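Per the Definitions section, the block hash is the 32-byte prefix of the 48-byte running hash, which mirror nodes then hex-encode before exposing. A minimal sketch, with illustrative helper names:

```java
import java.util.Arrays;

/** Sketch of deriving and encoding a block hash from a 48-byte running hash. */
public class BlockHashHelper {
    /** 32-byte prefix of the 48-byte running hash, per the Definitions section. */
    public static byte[] fromRunningHash(byte[] runningHash48) {
        if (runningHash48.length != 48) {
            throw new IllegalArgumentException("expected a 48-byte running hash");
        }
        return Arrays.copyOf(runningHash48, 32);
    }

    /** Mirror nodes hex-encode all block properties before exposing them. */
    public static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder("0x");
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```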
Mirror Nodes

Based on the updates specified above, the record stream objects will carry all the information Mirror Nodes need to build Record files/blocks and to store enough data about them to answer all block-related queries outlined above. To do this, Mirror Nodes will need to read and store the block number specified in the Record file. For old record files, block numbers will be derived through a migration process as specified below.
Historical State / Migration
Record files prior to the introduction of the block properties will not expose the required information. Mirror nodes that have a full sync on the testnet/mainnet networks will export a file containing a mapping of record file consensus end to block number, in compressed CSV format.

The file would be generated by executing the SQL
select concat(consensus_end, ',', index) from record_file order by consensus_end asc
via psql against the mainnet and testnet databases. It will be taken from mirror nodes that have a full history from stream start. Later, after the block HIP has been implemented, the file will be updated with all values up to the first record file that contains a block number.

A repeatable Flyway migration would be added to correct historical record files for partial mirror nodes. It would work by applying the exported mapping to the testnet or mainnet record_file table. After the migration, Mirror nodes must be able to answer queries of type:
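The exported mapping can be consumed by a partial mirror node roughly as follows. This is a sketch only: the class and method names are hypothetical, and the lookup rule (a record file's block number is the first mapped consensus_end at or after a timestamp) is an assumption based on the CSV format described above.

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

/** Sketch: consensus_end -> block index mapping from the exported CSV. */
public class BlockNumberMapping {
    // sorted by consensus_end so range lookups are cheap
    private final NavigableMap<Long, Long> byConsensusEnd = new TreeMap<>();

    /** Loads lines of the form "<consensus_end>,<index>" as produced by the SQL above. */
    public void load(String csv) {
        for (String line : csv.split("\n")) {
            String[] parts = line.split(",");
            byConsensusEnd.put(Long.parseLong(parts[0].trim()), Long.parseLong(parts[1].trim()));
        }
    }

    /** Block number of the record file covering the given consensus timestamp (nanos),
     *  i.e. the first mapped consensus_end at or after it; null if past the mapping. */
    public Long blockNumberFor(long consensusNanos) {
        Map.Entry<Long, Long> e = byConsensusEnd.ceilingEntry(consensusNanos);
        return e == null ? null : e.getValue();
    }
}
```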
Block Properties

The following table specifies all block properties, at which point each is computed, and which properties must be returned through Mirror Node APIs. Mirror nodes must hex encode all properties prior to exposing them through their APIs. Several properties default to the hash of empty data (0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347) and are expected to be populated in the future once the traceability work is finalised.
Backwards Compatibility
The following breaking changes will be introduced with the implementation of the HIP:

BLOCKHASH will no longer return an empty hash, but the actual hash of the block as per the EVM specification.
NUMBER will no longer return the timestamp of the block, but rather the proper block number as specified in the HIP.

Security Implications
How to Teach this
Reference Implementation
Initial POC in services:

Rejected Ideas
Two iterations were conducted prior to settling on this approach:

1. Defining a block per consensus round. It became clear that this approach is not suitable, as it implied that multiple blocks would be issued per second, adding load for infrastructure providers whose clients iterate through blocks to query for data.
2. Defining consensus events as blocks. This proposal had a much higher cognitive load (1 block = 1 record file is easier to grasp) and required more changes to the platform in order to be implemented. On top of that, it shared the high-frequency drawback of the first approach.

Open Issues
References
Copyright/license
This document is licensed under the Apache License, Version 2.0 --
see LICENSE or (https://www.apache.org/licenses/LICENSE-2.0)