
Parallel transaction execution #2919

Closed · wants to merge 11 commits

Conversation

@Jim8y (Contributor) commented on Sep 26, 2023

Typically, transactions are executed and verified sequentially; with this PR, Neo can run transactions in parallel.

However, this will require nodes to have more memory.

Even if it is not enabled for consensus, this will speed up the sync process.

I will run tests for a performance comparison.

@shargon (Member) commented on Sep 26, 2023

It must be a parallel deterministic execution

@Jim8y (Contributor, Author) commented on Sep 26, 2023

> It must be a parallel deterministic execution

@shargon For the execution, no; for the result, yes.

@shargon (Member) commented on Sep 26, 2023

For both:

One tx burns, another tx transfers... it must be deterministic because the result could be different.

Comment on lines 424 to +435
foreach (TransactionState transactionState in transactionStates)
{
    Transaction tx = transactionState.Transaction;
    using ApplicationEngine engine = ApplicationEngine.Create(TriggerType.Application, tx, clonedSnapshot, block, system.Settings, tx.SystemFee);
    engine.LoadScript(tx.Script);
    transactionState.State = engine.Execute();
    if (transactionState.State == VMState.HALT)
    DataCache txSnapshot = snapshot.CreateSnapshot();
    var task = OnExecuteTransactionAsync(system, block, txSnapshot, transactionState);
    tasks.Add(task);
}

// var readSet = new HashSet<StorageKey>();
var writeSet = new HashSet<StorageKey>();

var randomUsed = false;
foreach (var task in tasks)
A Member commented:
Can you combine these two loops? I see you are using OnExecuteTransactionAsync to do the loading, but I think if you batch, or even just give OnExecuteTransactionAsync a callback method, you should be good and may speed things up some more with one loop instead of two. Or you could use Task.WhenAll to keep things in sync.
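
For illustration only, the Task.WhenAll idea could look roughly like the sketch below. The OnExecuteTransactionAsync return type is assumed from the diff (a tuple of state, snapshot, and an executed flag), and the code is assumed to live inside an async method; this is a sketch, not the PR's actual code.

// Sketch only: schedule every speculative execution in one loop,
// then wait for all of them at once and commit in block order.
var tasks = new List<Task<(TransactionState, DataCache, bool)>>();
foreach (TransactionState transactionState in transactionStates)
{
    DataCache txSnapshot = snapshot.CreateSnapshot();
    tasks.Add(OnExecuteTransactionAsync(system, block, txSnapshot, transactionState));
}

var results = await Task.WhenAll(tasks);   // wait for every execution to finish
foreach (var (transactionState, txSnapshot, executed) in results)
{
    // conflict checks and the sequential commit would go here
}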

@Jim8y (Author) replied:
They are different: parallel execution, sequential commit.

@Jim8y (Contributor, Author) commented on Sep 26, 2023

> For both:
>
> One tx burns, another tx transfers... it must be deterministic because the result could be different.

Indeed; that is why I added a readSet and a writeSet to detect such conflicts and prevent state disorder.
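
For illustration, the read/write-set check could look roughly like the sketch below. GetReadSet comes from the diff; GetWriteSet and ReExecuteSequentially are hypothetical names used only to show the idea, not the PR's actual code.

// Optimistic concurrency sketch: a speculative result is accepted only if
// the tx did not read anything that an earlier tx in the block has written.
var writeSet = new HashSet<StorageKey>();
foreach (var task in tasks)                        // tasks kept in block order
{
    var (transactionState, txSnapshot, executed) = task.Result;
    if (txSnapshot.GetReadSet().Overlaps(writeSet))
    {
        // Conflict: the speculative result may be stale, so fall back to
        // re-executing this tx against the already-committed state
        // (its writes would then join writeSet as well).
        ReExecuteSequentially(transactionState);    // hypothetical helper
    }
    else
    {
        writeSet.UnionWith(txSnapshot.GetWriteSet()); // hypothetical accessor
        txSnapshot.Commit();
    }
}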

@roman-khimov (Contributor) left a comment:

  1. Let us all remember Support for Parallel Contract Validation & Execution #546.
  2. My take on this is still pretty close to Invoke contracts' payable method on asset Transfers #2012 (comment): it is better to do this with additional in-transaction data.
  3. My expectation for this particular approach is close-to-zero performance impact, with regressions in some cases.
  4. When we're talking about mainnet/testnet sync, they don't have a lot of transactions in their blocks.
  5. When we're talking about benchmarks, the C# node is slow at accepting transactions into the mempool (I don't think a lot has changed since Add an auto-clean bloom filter #2606 (comment)).
  6. But I can be wrong, as usual, so please at least try to sync mainnet/testnet and then some special set of blocks with lots of independent transactions, just to see if there is any impact in the best scenario for this approach.

{
    var (transactionState, txSnapshot, executed) = task.Result;

    if (txSnapshot.GetReadSet().Overlaps(writeSet) || (randomUsed && txSnapshot.isRandomNumberCalled))
A Contributor commented:

What if I have tx A reading nothing and writing some V1 into K, and tx B reading nothing and writing some V2 into K?

@Jim8y (Author) replied:

Then tx B will re-execute.

@Jim8y (Author) commented on Sep 26, 2023:

Well, this is a feature anyway, for a potential ecosystem boost. I know it is only useful if we push Neo to its limit; in reality it is not of much use considering all those 0-tx blocks.

clonedSnapshot.Commit();
DataCache tmp = snapshot.CreateSnapshot();
var task2 = OnExecuteTransactionAsync(system, block, tmp, transactionState);
tasks.Add(task2);
A Contributor commented:

Why add it to the list if you're waiting for the result below anyway?

@Jim8y (Author) replied:

Parallel execution, sequential commit.

In this way, we can start committing, in the original order, the txs that have already finished, without waiting for the whole execution to complete.
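
Roughly, and as a sketch only (the await-based shape and the tuple returned by each task are assumed, not taken from the PR):

// "Parallel execution, sequential commit": all txs execute concurrently,
// but results are committed strictly in block order as each task finishes,
// so early txs can be committed while later ones are still running.
foreach (var task in tasks)                // tasks were queued in block order
{
    var (transactionState, txSnapshot, executed) = await task; // wait only for this tx
    txSnapshot.Commit();                   // conflict handling omitted here
}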

@Jim8y closed this by deleting the head repository on Oct 2, 2023.