Parallel transaction execution #2919
Conversation
It must be a parallel deterministic execution
@shargon for the execution
For both: One tx burn, other tx transfer... it must be deterministic because the result could be different
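As a toy illustration of that point (not Neo code; the names and numbers are made up), consider two transactions that both debit the same balance. Whichever runs second can fail, so every node has to apply them in the same order:

using System;

// Toy illustration (not Neo code): a shared balance touched by a "burn" and a "transfer".
decimal balance = 10m;

bool TryDebit(decimal amount)
{
    if (balance < amount) return false;
    balance -= amount;
    return true;
}

// Burn 8 first, then transfer 8: the transfer fails.
// Reverse the order and it is the burn that fails.
// If nodes applied the two transactions in different orders, their state would
// diverge, so parallel execution must still yield the sequential (block-order) result.
Console.WriteLine($"burn:     {TryDebit(8m)}"); // True
Console.WriteLine($"transfer: {TryDebit(8m)}"); // False, only 2 left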
foreach (TransactionState transactionState in transactionStates)
{
    // Previously each transaction was executed inline and sequentially:
    //   Transaction tx = transactionState.Transaction;
    //   using ApplicationEngine engine = ApplicationEngine.Create(TriggerType.Application, tx, clonedSnapshot, block, system.Settings, tx.SystemFee);
    //   engine.LoadScript(tx.Script);
    //   transactionState.State = engine.Execute();
    //   if (transactionState.State == VMState.HALT) ...
    // Now each transaction gets its own snapshot and is queued as a task.
    DataCache txSnapshot = snapshot.CreateSnapshot();
    var task = OnExecuteTransactionAsync(system, block, txSnapshot, transactionState);
    tasks.Add(task);
}
// var readSet = new HashSet<StorageKey>();
var writeSet = new HashSet<StorageKey>();

var randomUsed = false;
foreach (var task in tasks)
Can you combine these two loops? I see you using OnExecuteTransactionAsync to do the loading. But I think if you were to batch, or even just have a callback method for OnExecuteTransactionAsync, you should be good and maybe speed things up some more with one loop instead of two. Or you could use Task.WhenAll to keep things in sync.
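For reference, a minimal sketch of the Task.WhenAll shape being suggested here; it assumes the PR's OnExecuteTransactionAsync(system, block, snapshot, state) signature and the surrounding transactionStates/snapshot variables, so it is an illustration rather than the PR's actual code:

// Sketch only: OnExecuteTransactionAsync, transactionStates and snapshot are assumed
// to come from the PR's surrounding code.
var tasks = new List<Task<(TransactionState, DataCache, bool)>>();
foreach (TransactionState ts in transactionStates)
    tasks.Add(OnExecuteTransactionAsync(system, block, snapshot.CreateSnapshot(), ts));

// One await point for the whole batch; Task.WhenAll returns the results in the order
// the tasks were added, so block order is preserved for the commit phase.
var results = await Task.WhenAll(tasks);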
They are different: parallel execution, sequential commit.
Indeed, thus I added
- Let us all remember Support for Parallel Contract Validation & Execution #546.
- My take on this is still pretty close to Invoke contracts' payable method on asset Transfers #2012 (comment); it's better to do this with additional in-transaction data.
- My expectation from this particular approach would be close to zero performance impact with regressions in some cases.
- When we're talking mainnet/testnet sync, they don't have a lot of transactions in blocks.
- When we're talking benchmarks, C# node is slow at accepting transactions into mempool (I don't think a lot has changed since Add an auto-clean bloom filter #2606 (comment)).
- But I can be wrong, as usual, so please at least try to sync mainnet/testnet and then some special set of blocks with lots of independent transactions, just to see if there is any impact in the best scenario for this approach.
{
    var (transactionState, txSnapshot, executed) = task.Result;

    if (txSnapshot.GetReadSet().Overlaps(writeSet) || (randomUsed && txSnapshot.isRandomNumberCalled))
What if I have tx A reading nothing and writing into K some V1 and tx B reading nothing and writing into K some V2?
Then tx B will re-execute.
Well, this is a feature anyway, for a potential ecosystem boost. I know this is useful only if we push to the Neo limit; in reality it's not of much use considering those 0-tx blocks.
clonedSnapshot.Commit();
DataCache tmp = snapshot.CreateSnapshot();
var task2 = OnExecuteTransactionAsync(system, block, tmp, transactionState);
tasks.Add(task2);
Why add it to the list if you're waiting for the result below anyway?
Parallel execution, sequential commit.
In this way, we can start committing transactions in the original order as soon as they have finished, without waiting for the whole batch to finish executing.
Typically transactions are executed and verified sequentially; with this PR, Neo can run transactions in parallel.
However, this will require nodes to have more memory.
Even if not enabled for consensus, this will speed up the sync process.
Will test for a performance comparison.
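Putting the fragments above together, a condensed, hedged sketch of the intended flow (parallel execution, sequential commit); it reuses the names shown in the diff (OnExecuteTransactionAsync, GetReadSet, isRandomNumberCalled), while GetWriteSet and the exact conflict/retry handling are assumptions, not the PR's verbatim code:

// Phase 1: start every transaction on its own snapshot (parallel execution).
var tasks = new List<Task<(TransactionState, DataCache, bool)>>();
foreach (TransactionState ts in transactionStates)
    tasks.Add(OnExecuteTransactionAsync(system, block, snapshot.CreateSnapshot(), ts));

// Phase 2: walk the tasks in block order (sequential commit).
var writeSet = new HashSet<StorageKey>();
var randomUsed = false;
foreach (var task in tasks)
{
    var (transactionState, txSnapshot, executed) = task.Result;

    // If this tx read state written by an earlier tx in the block, or both consumed
    // the runtime random value, its speculative result may be wrong: re-run it on a
    // fresh snapshot that already contains the earlier commits.
    if (txSnapshot.GetReadSet().Overlaps(writeSet) || (randomUsed && txSnapshot.isRandomNumberCalled))
    {
        var retry = OnExecuteTransactionAsync(system, block, snapshot.CreateSnapshot(), transactionState);
        (transactionState, txSnapshot, executed) = retry.Result;
    }

    // Commit in the original order so the end state matches sequential execution.
    txSnapshot.Commit();
    writeSet.UnionWith(txSnapshot.GetWriteSet()); // GetWriteSet is an assumed helper
    randomUsed |= txSnapshot.isRandomNumberCalled;
}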