Meetings
- Q: Should mempool keep future transactions?
- In Cosmos, the mempool doesn’t keep future transactions. It just rejects them. It's the application’s responsibility to resend the transactions.
- Currently, the Foundry mempool maintains `CurrentQueue` and `FutureQueue`. It's the mempool's responsibility to read sequence numbers and move transactions from `FutureQueue` to `CurrentQueue`.
- Conclusion: we will do it in Cosmos style because it will be simpler.
  - Applications can manage their own `FutureQueue`s.
  - To do this, the application needs interfaces to
    - read/write memory that is not a part of the blockchain state (to manage the `FutureQueue`).
    - add transactions to the mempool.
- However, we need further discussion regarding this interface.
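The two interfaces above could be sketched as a trait the host exposes to the application. All names here (`HostInterface`, `get_memory`, `put_memory`, `add_to_mempool`, `Tx`) are assumptions for illustration, not the actual Foundry API:

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct Tx {
    seq: u64,
    body: Vec<u8>,
}

// Hypothetical host-side interface for the application.
trait HostInterface {
    // Read/write memory that is NOT part of the blockchain state,
    // so the application can keep its own FutureQueue there.
    fn get_memory(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn put_memory(&mut self, key: &[u8], value: Vec<u8>);
    // Resend a transaction whose sequence number has become current.
    fn add_to_mempool(&mut self, tx: Tx);
}

// A toy in-process host used only to exercise the trait.
#[derive(Default)]
struct MockHost {
    memory: HashMap<Vec<u8>, Vec<u8>>,
    mempool: Vec<Tx>,
}

impl HostInterface for MockHost {
    fn get_memory(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.memory.get(key).cloned()
    }
    fn put_memory(&mut self, key: &[u8], value: Vec<u8>) {
        self.memory.insert(key.to_vec(), value);
    }
    fn add_to_mempool(&mut self, tx: Tx) {
        self.mempool.push(tx);
    }
}

fn main() {
    let mut host = MockHost::default();
    host.put_memory(b"future_queue", vec![1, 2, 3]);
    host.add_to_mempool(Tx { seq: 0, body: vec![] });
    assert_eq!(host.get_memory(b"future_queue"), Some(vec![1, 2, 3]));
    assert_eq!(host.mempool.len(), 1);
}
```

With such an interface, the application alone decides when a future transaction becomes current and resends it, matching the Cosmos-style conclusion above.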
If we don't implement `FutureQueue`, we cannot propagate future transactions.
Q. Is it hard to implement the propagation functionality? Do we have to propagate future transactions?
- Maybe we can create a separate pool
Do we need `FutureQueue`?
- Advantage of `FutureQueue`
  - prevents delay caused by sequential transactions
- Disadvantage of `FutureQueue`
  - It's hard to maintain `FutureQueue`.
  - Maybe we don't have to maintain `FutureQueue` with the account model.
First requirement
"Sequentially dependent transactions must be able to be inserted into a single block"
- Light client requires this as well
- "Sequentially dependent transactions must be able to be inserted into a single block" can be solved by loosening CheckTx
- it's the application developer's responsibility to design CheckTx
- We can solve the problem of ordering txs which have sequential dependencies by introducing the `TransactionOrder` encoding scheme when calculating priorities.
  - We can encode the `TransactionOrder` struct into a `Priority` field.
  - Or we can use the `TransactionOrder` type directly.
    - In this case, the mempool can further optimize its block creation logic.
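One way the first option could work is to pack the struct's fields into a single integer so that an ordinary numeric sort yields the desired order. This is only a sketch under the assumption that `TransactionOrder` carries a fee and a sequence number; the real field layout is not specified in these notes:

```rust
// Hypothetical TransactionOrder with a 64-bit fee and sequence number.
#[derive(Clone, Copy)]
struct TransactionOrder {
    fee: u64,
    seq: u64,
}

impl TransactionOrder {
    /// Encode into a single priority value: fee occupies the high bits,
    /// so higher-fee txes sort first; the inverted seq occupies the low
    /// bits, so at equal fee, lower sequence numbers sort first.
    fn priority(self) -> u128 {
        ((self.fee as u128) << 64) | ((u64::MAX - self.seq) as u128)
    }
}

fn main() {
    let cheap_early = TransactionOrder { fee: 10, seq: 0 };
    let cheap_late = TransactionOrder { fee: 10, seq: 1 };
    let expensive = TransactionOrder { fee: 20, seq: 5 };
    assert!(expensive.priority() > cheap_early.priority());
    assert!(cheap_early.priority() > cheap_late.priority());
}
```

Note that such a flat encoding orders sequence numbers globally, not per-account, which is one reason using the `TransactionOrder` type directly would give the mempool more room to optimize.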
We concluded to implement a bulk version of CheckTx, which accepts the whole mempool as the input, and returns a list of Txes to be included in a block and a list of Txes that remain after filtering.
We should keep a simple, stateless version of CheckTx to guard the mempool from spam Txes.
- This CheckTx will return a `bool` value only.
We decided to use this interface because it's quite hard to order transactions with a single `priority` field of CheckTx.
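The pair of interfaces decided above might look like the following. The function names, the `Tx` type, and the toy ordering rule are all assumptions; the point is only the shape of the signatures (stateless guard returns a `bool`, bulk CheckTx splits the mempool into a block and a remainder):

```rust
#[derive(Clone, PartialEq, Debug)]
struct Tx {
    seq: u64,
    size: usize,
}

// Simple, stateless CheckTx guarding the mempool from spam:
// returns only a bool.
fn check_tx_stateless(tx: &Tx) -> bool {
    tx.size <= 1024 // hypothetical size limit
}

// Bulk CheckTx: takes the whole mempool and returns
// (txes to include in the next block, txes remaining after filtering).
fn check_txes_bulk(mempool: Vec<Tx>, block_limit: usize) -> (Vec<Tx>, Vec<Tx>) {
    let mut sorted = mempool;
    sorted.sort_by_key(|tx| tx.seq); // toy ordering rule for illustration
    let remaining = sorted.split_off(block_limit.min(sorted.len()));
    (sorted, remaining)
}

fn main() {
    assert!(check_tx_stateless(&Tx { seq: 0, size: 100 }));
    let mempool = vec![
        Tx { seq: 2, size: 10 },
        Tx { seq: 0, size: 10 },
        Tx { seq: 1, size: 10 },
    ];
    let (block, rest) = check_txes_bulk(mempool, 2);
    assert_eq!(block.len(), 2);
    assert_eq!(block[0].seq, 0);
    assert_eq!(rest.len(), 1);
}
```

Because the bulk version sees every pending Tx at once, the application can resolve sequential dependencies itself instead of forcing them through a single priority number.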
There are three candidates:
1. Run (lightweight) CheckTx for every Tx in the mempool for each block
   - Pros: the application does not have to know about stateless information (mempool)
   - Cons: there might be a performance problem
2. Application calls `removeTx(TxHash)` directly
   - Cons: the application has to keep mempool data by itself
   - Pros: probably faster
3. (Discussed after the meeting) Application calls some bulk version of `remove_old_txes`
   - Input will be the whole mempool Txes, and output will be the Txes after filtering

We will choose #3 because it's consistent with our 'bulk CheckTx' design.
We may have a separate interface for evidence handling (ex. a `report_evidence` ABCI), but the current `execute_block` with an `evidences` parameter should be sufficient.
In this case, the next proposer will be rewarded as the informant of the evidences, and we concluded that this will be sufficient.
Requirement: Txes should be ordered based on
- correctness (ex. seq)
- efficiency (to maximize fee)
How can we order transactions from different modules?
- Tx from some module can be dependent on a Tx from different module
- coordinator doesn't have enough information to order Txes from all modules
Conclusion: we need to investigate how Cosmos handles this issue
- We need to serialize DB values since they will be used by modules of various languages.
- We should rename `moduleID`, because it's a DB-specific ID.
- Disk writing and Merkle root computing in the `Commit` interface should be separated into two interfaces.
- `DBContext`
  - Should we expose the subspace to the application?
  - What should happen if an application uses a wrong `moduleID`?
    - The coordinator can enforce some rule to prevent this.
- `revert` should be exposed to applications.
  - Applications should be able to execute transactions on their own, and revert to the previous state if execution fails.
- We should have a look at `HashDB`.
- We can have a seminar on proc_macro.
- The `coordinator` interface does not have to be changed to support read-only access to the module DB.
  - The module should be responsible for wrapping data for such uses.
- We don't need an interface for checkpointing individual storages. Global checkpointing is better.
- `DbContext` needs a `delete` interface.
- `DbContext` should have a `has` interface for optimization.
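Putting the `delete` and `has` proposals together, a `DbContext` trait might look like this. The trait shape and the in-memory backing store are assumptions; only the two new methods come from the discussion above:

```rust
use std::collections::HashMap;

// Hypothetical DbContext with the proposed `delete` and `has` methods.
trait DbContext {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn set(&mut self, key: &[u8], value: Vec<u8>);
    // Newly proposed: remove a key entirely.
    fn delete(&mut self, key: &[u8]);
    // Newly proposed: existence check that avoids copying the value.
    fn has(&self, key: &[u8]) -> bool;
}

#[derive(Default)]
struct MemDb {
    inner: HashMap<Vec<u8>, Vec<u8>>,
}

impl DbContext for MemDb {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.inner.get(key).cloned()
    }
    fn set(&mut self, key: &[u8], value: Vec<u8>) {
        self.inner.insert(key.to_vec(), value);
    }
    fn delete(&mut self, key: &[u8]) {
        self.inner.remove(key);
    }
    fn has(&self, key: &[u8]) -> bool {
        self.inner.contains_key(key)
    }
}

fn main() {
    let mut db = MemDb::default();
    db.set(b"a", vec![1]);
    assert!(db.has(b"a"));
    db.delete(b"a");
    assert!(!db.has(b"a"));
}
```

`has` matters when values are large: the module only needs a yes/no answer, so copying the serialized value out of the DB would be wasted work.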
- The `remove_old_transaction` interface should be changed.
  - It should return a list of tags, one for each input transaction.
    - Tags:
      1. should remain in the mempool
      2. became invalid
      3. valid, but should be removed from the mempool because of low priority
    - 2 and 3 should be separately informed in order to support mempool-level optimizations.
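The three tags map naturally onto an enum, with the invalid and low-priority cases kept as distinct variants so the mempool can treat them differently. The enum name, the function signature, and the filtering rule below are illustrative assumptions:

```rust
// Hypothetical tags returned by remove_old_transaction, one per input Tx.
#[derive(Clone, Copy, PartialEq, Debug)]
enum TxTag {
    Keep,        // 1. should remain in the mempool
    Invalid,     // 2. became invalid
    LowPriority, // 3. valid, but evicted because of low priority
}

// Sketch of the changed interface: given the sequence numbers of the
// mempool Txes, the current sequence number, and a capacity, return a
// tag for each input transaction.
fn remove_old_transaction(seqs: &[u64], current_seq: u64, capacity: usize) -> Vec<TxTag> {
    seqs.iter()
        .enumerate()
        .map(|(i, &seq)| {
            if seq < current_seq {
                TxTag::Invalid // stale sequence number
            } else if i >= capacity {
                TxTag::LowPriority // valid, but over mempool capacity
            } else {
                TxTag::Keep
            }
        })
        .collect()
}

fn main() {
    let tags = remove_old_transaction(&[5, 1, 6], 3, 2);
    assert_eq!(tags, vec![TxTag::Keep, TxTag::Invalid, TxTag::LowPriority]);
}
```

Distinguishing `Invalid` from `LowPriority` lets the mempool, for example, drop the former permanently while keeping the latter eligible for re-admission when space frees up.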
- To separate the host from modules, we can work on removing application-level code from `core`.
  - We can call the remaining part the host.
- We may skip some e2e tests during mold development to speed up.
- The `mold` branch should be rebased onto master.
We will remove the `context` field from the coordinator.
Transaction execution ABCIs (`open_block`, `execute_transactions`, and `close_block`) should pass `context` as an argument.
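The shape of that change could look like the trait below: the coordinator no longer owns a `context` field, and each transaction-execution call receives it explicitly. All types and names here are assumptions for illustration:

```rust
// Hypothetical per-block context, passed by argument rather than stored
// in the coordinator.
struct Context {
    block_height: u64,
}

struct Tx;

trait Application {
    fn open_block(&mut self, ctx: &mut Context);
    fn execute_transactions(&mut self, ctx: &mut Context, txes: &[Tx]) -> usize;
    fn close_block(&mut self, ctx: &mut Context);
}

// Minimal application that just counts executed transactions.
struct Counter {
    executed: usize,
}

impl Application for Counter {
    fn open_block(&mut self, _ctx: &mut Context) {}
    fn execute_transactions(&mut self, _ctx: &mut Context, txes: &[Tx]) -> usize {
        self.executed += txes.len();
        self.executed
    }
    fn close_block(&mut self, _ctx: &mut Context) {}
}

fn main() {
    let mut ctx = Context { block_height: 1 };
    let mut app = Counter { executed: 0 };
    app.open_block(&mut ctx);
    let n = app.execute_transactions(&mut ctx, &[Tx, Tx]);
    app.close_block(&mut ctx);
    assert_eq!(n, 2);
    assert_eq!(ctx.block_height, 1);
}
```

Passing the context as an argument keeps the coordinator stateless with respect to block execution, so a single coordinator can serve interleaved calls without juggling hidden state.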
The host should provide the application a `get_state` interface so that the application can answer RPC calls.
Transaction signature verification should be done on the host side.
- Applications can be simpler.
- When we verify multiple signatures at once on the host side, there are some chances for optimization.
Some fields of transactions are shared by multiple modules.
How can we handle this?
- module hierarchy (a single super module sees seq of all transactions)
But this is too complex a design for solving the transaction ordering problem.
It's better to have a special module for transaction ordering.
Signature verification means nothing more than computation to the host, so there is no reason to keep it on the host side.
For flexibility, we should move this to the application side.
Modules should be able to manage storage by their own.
It's better for modules to use the built-in data structures of their languages.
To support this, we should provide some interface to modules.