feat: --send-all option #23

Open
wants to merge 3 commits into base: dev-0.3.17
41 changes: 21 additions & 20 deletions cli/src/modules/guide.txt
@@ -1,10 +1,10 @@
Before you start, you must configure the default network setting. There are currently
3 networks available: `mainnet`, `testnet-10`, and `testnet-11`. If you wish to experiment,
you should select `testnet-11` by entering `network testnet-11`.

The `server` command configures the target server. You can connect to any Rusty Spectre
node that has wRPC enabled with `--rpclisten-borsh=0.0.0.0`. If the server setting
is set to 'public', the node will connect to the public node infrastructure.

Both network and server values are stored in the application settings and are
used when running a local node or connecting to a remote node.
@@ -13,43 +13,44 @@ used when running a local node or connecting to a remote node.

`wallet create [<name>]` Use this command to create a local wallet. The <name> argument
is optional (the default wallet name is "spectre") and allows you to create multiple
named wallets. Only one wallet can be opened at a time. Keep in mind that a wallet can have multiple
accounts, so you only need one wallet unless, for example, you want to separate wallets for
personal and business needs (but you can also create isolated accounts within a wallet).

Make sure to record your mnemonic, even if working with a testnet, so as not to lose your
testnet SPR.

`open <name>` - Opens the wallet (the wallet is opened automatically after creation).

`list` - Lists all wallet accounts and their balances.

`select <account-name>` - Selects an active account. The <account-name> can be the first few letters of the name or ID of the account.

`account create bip32 [<name>]` - Allows you to create additional HD wallet accounts linked to the default private key of your wallet.

`address` - Shows your selected account address.

Before you transact: the `mute` option (enabled by default) toggles mute on/off. Mute controls terminal
output of internal framework events. Rust and JavaScript/TypeScript applications integrating with this platform
are meant to update their state by monitoring event notifications, and mute lets you observe these events in
the terminal. When mute is off, all events are displayed in the terminal. When mute is on, you can use the 'track'
command to enable specific event notifications.

`transfer <account-name> <amount>` - Transfers funds from the active account to a different account. For example, 'transfer p 1' will transfer 1 SPR from
the selected account to an account named 'pete' (whose name starts with the letter 'p').

`send <address> <amount>` - Sends funds to a destination address.

`send <address> --send-all` - Sends all available funds to a destination address.

`estimate <amount>` - Provides a fee and UTXO consumption estimate for a transaction of a given amount.

`sweep` - Sweeps account UTXOs to reduce the UTXO size.

`history list` - Shows previous account transactions.

`history details` - Shows previous account transactions with extended information.

`monitor` - A test screen environment that periodically updates account balances.

`rpc` - Allows you to execute RPC methods against the node (not all methods are currently available).
19 changes: 16 additions & 3 deletions cli/src/modules/send.rs
@@ -12,13 +12,26 @@ impl Send {
        let account = ctx.wallet().account()?;

        if argv.len() < 2 {
            tprintln!(ctx, "Usage: send <address> <amount|--send-all> <priority fee>");
            return Ok(());
        }

        let address = Address::try_from(argv.first().unwrap().as_str())?;
        let priority_fee_sompi = try_parse_optional_spectre_as_sompi_i64(argv.get(2))?.unwrap_or(0);
        // handle --send-all
        let amount_sompi = if argv.get(1).unwrap() == "--send-all" {
            // get mature balance from account
            let balance = account.balance().ok_or_else(|| Error::Custom("Failed to retrieve account balance".into()))?;
            let mature_balance_sompi = balance.mature;

            // subtract priority fee from mature balance
            mature_balance_sompi
                .checked_sub(priority_fee_sompi.try_into().unwrap_or(0))
                .ok_or_else(|| Error::Custom("Insufficient funds to cover the priority fee.".into()))?
        } else {
            // parse amount if not using --send-all
            try_parse_required_nonzero_spectre_as_sompi_u64(argv.get(1))?
        };
        let outputs = PaymentOutputs::from((address.clone(), amount_sompi));
        let abortable = Abortable::default();
        let (wallet_secret, payment_secret) = ctx.ask_wallet_secret(Some(&account)).await?;
@@ -40,7 +53,7 @@ impl Send {

tprintln!(ctx, "Transaction sent - {summary}");
tprintln!(ctx, "\nSending {} SPR to {address}, transaction IDs:", sompi_to_spectre_string(amount_sompi));
// tprintln!(ctx, "{}\n", ids.into_iter().map(|a| a.to_string()).collect::<Vec<_>>().join("\n"));
tprintln!(ctx, "\n{}\n", _ids.into_iter().map(|a| a.to_string()).collect::<Vec<_>>().join("\n"));

Ok(())
}
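
The substance of the change is the amount computation for `--send-all`: spend the mature balance minus the priority fee, and treat a fee larger than the balance as an error. A minimal, self-contained sketch of that arithmetic (the function name `send_all_amount` is illustrative, not part of this PR):

```rust
/// Sketch: the spendable amount for --send-all.
/// Returns None when the priority fee exceeds the mature balance.
fn send_all_amount(mature_balance_sompi: u64, priority_fee_sompi: i64) -> Option<u64> {
    // A negative fee is clamped to zero, mirroring unwrap_or(0) in the PR.
    let fee: u64 = priority_fee_sompi.try_into().unwrap_or(0);
    mature_balance_sompi.checked_sub(fee)
}

fn main() {
    assert_eq!(send_all_amount(100_000, 1_000), Some(99_000));
    assert_eq!(send_all_amount(500, 1_000), None);    // insufficient funds
    assert_eq!(send_all_amount(500, -10), Some(500)); // negative fee clamped
}
```

Using `checked_sub` keeps the insufficient-funds case an explicit error rather than a `u64` underflow.
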
142 changes: 70 additions & 72 deletions consensus/src/processes/Parallel Processing.md
@@ -1,108 +1,106 @@
# Parallel Block Processing

A design document intended to guide the new concurrent implementation of header and block processing.

## Sequential processing flow (in go-spectred)

Below we detail the current state of affairs in _go-spectred_ and discuss future parallelism opportunities. Processing dependencies between various stages are detailed in square brackets [***deps; type***].

### Header processing

- Pre-POW (aka "_header in isolation_" -- no DB writes to avoid spamming; see the sketch after this list):
  - block version
  - timestamp not in future
  - parents limit (>0 AND <= limit)
- POW:
  - parents not "virtual genesis"
  - parent headers exist [***headers; read***]
    - (returns either invalid parent error or missing parents list)
  - stage parents at all levels (stages topology manager; drops missing parents from level > 0; uses virtual genesis if no parents) [***relations; write***]
  - verify parents are antichain (reachability manager DAG queries) [***reachability; read***]
  - verify block is in pruning point future (uses reachability queries on parents) [***reachability; read***]
  - check POW of block (against block declared target)
  - check difficulty and blue work
  - run GHOSTDAG and stage [***reachability; read*** | ***ghostdag; write***]
  - calculate DAA window and stage; compute difficulty from window [***windows; read | write***]
  - verify bits from calculated difficulty
- Post-POW (aka "_header in context_"):
  - validate median time (uses median time window) [***windows; read***]
  - check mergeset size limit (could be done following GHOSTDAG)
  - stage reachability data [***reachability; write***]
  - check indirect parents (level > 0) [***headers | relations | reachability; read***]
  - check bounded merge depth [***reachability; read | merge root store; write | finality store; write***]
  - check DAA score
  - check header blue work and blue score
  - validate header pruning point [***reachability | pruning store; read***]
- Commit all changes

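The pre-POW stage is deliberately stateless, which is why it can run before any store access. A sketch of those isolation checks, using hypothetical field names and limit constants rather than the actual rusty-spectre types:

```rust
// Hypothetical header shape and limits; illustrative only.
struct Header {
    version: u16,
    timestamp: u64,           // milliseconds since epoch
    direct_parents: Vec<u64>, // level-0 parent hashes
}

const BLOCK_VERSION: u16 = 1;             // assumed value
const MAX_FUTURE_DRIFT_MS: u64 = 130_000; // assumed value
const MAX_DIRECT_PARENTS: usize = 10;     // assumed value

fn check_header_in_isolation(h: &Header, now_ms: u64) -> Result<(), String> {
    // block version
    if h.version != BLOCK_VERSION {
        return Err(format!("unexpected block version {}", h.version));
    }
    // timestamp not in future
    if h.timestamp > now_ms + MAX_FUTURE_DRIFT_MS {
        return Err("timestamp too far in the future".into());
    }
    // parents limit (>0 AND <= limit)
    if h.direct_parents.is_empty() || h.direct_parents.len() > MAX_DIRECT_PARENTS {
        return Err("parent count out of bounds".into());
    }
    Ok(())
}
```
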
### Block processing

- Block body in isolation:
  - verify all transactions have UTXO inputs
  - verify block Merkle root
  - verify at least one transaction
  - verify first transaction is coinbase
  - verify all others are non-coinbase
  - check coinbase blue score
  - check transactions are ordered by subnet ID
  - for each transaction, validate it in isolation (includes anything that can be checked without context)
  - check block mass
  - check for duplicate transactions (see the sketch after this section)
  - check double spends
  - validate gas limit

- Block body in context:
  - check block is not pruned (reachability queries from all tips -- relies on reachability data of current block)
  - check all transactions are finalized based on PoV DAA score and median time
  - check parent bodies exist
  - check coinbase subsidy
- Stage and commit block body and block status

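Two of the isolation checks flagged above reduce to set membership; a sketch with placeholder types:

```rust
use std::collections::HashSet;

// Placeholder transaction: an id plus the outpoints it spends.
struct Tx {
    id: u64,
    inputs: Vec<(u64, u32)>, // (producing tx id, output index)
}

fn check_duplicates_and_double_spends(txs: &[Tx]) -> Result<(), String> {
    let mut seen_ids = HashSet::new();
    let mut spent = HashSet::new();
    for tx in txs {
        // Duplicate-transaction check: every tx id must be unique in the block.
        if !seen_ids.insert(tx.id) {
            return Err(format!("duplicate transaction {}", tx.id));
        }
        // Double-spend check: no outpoint may be consumed twice in one block.
        for outpoint in &tx.inputs {
            if !spent.insert(*outpoint) {
                return Err(format!("double spend of {:?}", outpoint));
            }
        }
    }
    Ok(())
}
```
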
### Virtual-state processing (block UTXO data -- for the context of chain blocks only)

- (_roughly_)
  - build the UTXO state for selected parent through UTXO diffs from virtual (see the sketch after this list)
  - build the UTXO state for the current block based on selected parent state and transaction data from the mergeset
  - stage acceptance data
  - update diff paths to virtual
  - update virtual state

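A UTXO diff in the sense used above is just an add-set and a remove-set applied to a base UTXO set; chaining such diffs from virtual back to the selected parent reconstructs that parent's state. A minimal sketch (types illustrative):

```rust
use std::collections::{HashMap, HashSet};

type Outpoint = (u64, u32); // (tx id, output index)
type UtxoEntry = u64;       // stand-in for amount/script data

// A diff between two UTXO states.
struct UtxoDiff {
    add: HashMap<Outpoint, UtxoEntry>,
    remove: HashSet<Outpoint>,
}

fn apply_diff(utxos: &mut HashMap<Outpoint, UtxoEntry>, diff: &UtxoDiff) {
    for outpoint in &diff.remove {
        utxos.remove(outpoint);
    }
    for (outpoint, entry) in &diff.add {
        utxos.insert(*outpoint, *entry);
    }
}
```
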
## Parallel processing -- Discussion

There are two levels of possible concurrency to support: (i) process the various stages concurrently in a _pipeline_, i.e., when a block moves to body processing, other headers can enter the header processing stage, and so on; (ii) _parallelism_ within each processing "station" of the pipeline, i.e., within header processing, allowing _n_ independent blocks to be processed in parallel.

### Pipeline concurrency

The current code design (_go-spectred_) already logically supports this since the various processing stages were already decoupled for supporting efficient IBD.
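
As a toy illustration of this level of concurrency (not the actual wiring), each stage can own a thread and hand blocks downstream over a channel, so new headers keep entering stage one while earlier blocks are still in body processing:

```rust
use std::sync::mpsc;
use std::thread;

// Toy two-stage pipeline: header processing -> body processing.
// The u64 "block" payload and stage bodies are placeholders.
fn main() {
    let (tx_header, rx_header) = mpsc::channel::<u64>();
    let (tx_body, rx_body) = mpsc::channel::<u64>();

    let header_stage = thread::spawn(move || {
        for block in rx_header {
            // ... header validation would run here ...
            tx_body.send(block).unwrap();
        }
    });
    let body_stage = thread::spawn(move || {
        for block in rx_body {
            // ... body validation would run here ...
            println!("block {block} fully processed");
        }
    });

    for block in 0..4 {
        tx_header.send(block).unwrap(); // headers keep flowing in
    }
    drop(tx_header); // closing the input drains the pipeline
    header_stage.join().unwrap();
    body_stage.join().unwrap();
}
```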

### Header processing parallelism

If you analyze the dependency graph above, you can see this is the most challenging part. For instance, we cannot easily create multiple staging areas in parallel, since committing them without synchronization will introduce logical write conflicts.

#### **Natural DAG parallelism**

Throughout header processing, the computation naturally depends on previous output from parents and ancestors of the currently processed header. This means we cannot concurrently process a block with its ancestors; however, we can concurrently process blocks that are parallel to each other in the DAG structure (i.e., blocks which are in the anticone of each other). As we increase the block rate, more blocks will be mined in parallel, thus creating more parallelism opportunities as well.

This logic is already implemented in the `pipeline::HeaderProcessor` struct. The code uses a simple DAG-dependency mechanism to delay processing tasks until all depending tasks are completed. If there are no dependencies, a `rayon::spawn` assigns a thread-pool worker to the ready-to-be-processed header.
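
A toy version of that mechanism, assuming the `rayon` crate as a dependency (the real logic lives in `pipeline::HeaderProcessor`): each block tracks how many of its parents are still unprocessed, and a completed task releases any child whose counter reaches zero onto the thread pool. Hashes, the DAG shape, and the processing body are placeholders:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

type Hash = u32;

struct Dag {
    pending_parents: Mutex<HashMap<Hash, usize>>, // unprocessed-parent counts
    children: HashMap<Hash, Vec<Hash>>,
}

fn process<'s>(s: &rayon::Scope<'s>, dag: &'s Dag, block: Hash) {
    s.spawn(move |s| {
        println!("processing header {block}"); // ... real validation here ...
        // Release any child whose last pending parent was `block`.
        if let Some(children) = dag.children.get(&block) {
            for &child in children {
                let mut pending = dag.pending_parents.lock().unwrap();
                let count = pending.get_mut(&child).unwrap();
                *count -= 1;
                let ready = *count == 0;
                drop(pending);
                if ready {
                    process(s, dag, child);
                }
            }
        }
    });
}

fn main() {
    // 0 is genesis-like; 1 and 2 are in each other's anticone; 3 merges them.
    let dag = Dag {
        pending_parents: Mutex::new(HashMap::from([(1, 1), (2, 1), (3, 2)])),
        children: HashMap::from([(0, vec![1, 2]), (1, vec![3]), (2, vec![3])]),
    };
    rayon::scope(|s| process(s, &dag, 0)); // 1 and 2 may run in parallel
}
```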

#### **Managing store writes**

Most of the DB writes during header processing are append-only. That is, a new item is inserted into the store for the new header, and it is never modified in the future. This semantic means that no lock is needed to write to such a store as long as we verify that only a single worker thread "owns" each header (`DbGhostdagStore` is an example; note that the DB and cache instances used therein already support concurrency).
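
The insert-once discipline can be made explicit in the store API. A sketch, assuming the third-party `dashmap` crate for the concurrent map (in the real code the DB and cache layers play this role), which panics if two workers ever write the same header:

```rust
use dashmap::DashMap; // assumed dependency, not part of the codebase

type Hash = u32;

/// Append-only store: each key is written exactly once and never mutated.
struct AppendOnlyStore<T> {
    map: DashMap<Hash, T>,
}

impl<T: Clone> AppendOnlyStore<T> {
    fn insert_new(&self, hash: Hash, value: T) {
        // A second write to the same hash means two workers "own" one
        // header, which the single-owner rule forbids.
        assert!(self.map.insert(hash, value).is_none(), "duplicate write for {hash}");
    }

    fn get(&self, hash: &Hash) -> Option<T> {
        self.map.get(hash).map(|r| r.value().clone())
    }
}
```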

There are two exceptions to this: reachability and relations stores are both non-append-only. We currently assume that their processing time is negligible compared to overall header processing and thus use serialized upgradable-read/write locks to manage this part. See `pipeline::HeaderProcessor::commit_header`.
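
The upgradable-read pattern is sketched below with the `parking_lot` crate (an assumption for the sketch; see `pipeline::HeaderProcessor::commit_header` for the real call site): plain readers proceed concurrently, while at most one thread holds the upgradable guard and takes exclusive access only when it actually mutates:

```rust
use parking_lot::{RwLock, RwLockUpgradableReadGuard};
use std::collections::HashMap;

type Hash = u32;

fn main() {
    // Stand-in for a non-append-only store such as relations.
    let relations: RwLock<HashMap<Hash, Vec<Hash>>> = RwLock::new(HashMap::new());

    // Readers using read() run concurrently with this guard; upgradable
    // guards themselves are serialized, matching the text above.
    let guard = relations.upgradable_read();
    if !guard.contains_key(&7) {
        // Upgrade to exclusive access only for the actual mutation.
        let mut write = RwLockUpgradableReadGuard::upgrade(guard);
        write.insert(7, vec![1, 2]);
    }
}
```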

The current design should be benchmarked when header processing is fully implemented. If the reachability algorithms are a bottleneck, we can consider moving reachability and relations writes to a new processing unit named "Header DAG processing". This unit will support adding multiple blocks at one call to the reachability tree by performing a single reindexing for all (can be easily supported by current algorithms).

### Block processing parallelism

Seems straightforward.

### Virtual processing parallelism

- Process each chain block + mergeset sequentially.
- Within each such step:
  - transactions within each block can be validated against the UTXO set in parallel (see the sketch below)
  - blocks in the mergeset and transactions within them can be processed in parallel based on the consensus-agreed topological mergeset ordering -- however, conflicts might arise and need to be resolved according to said order.
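
The first bullet above maps directly onto data parallelism. A sketch with `rayon` (assumed dependency; transaction and UTXO types are placeholders for the real consensus types):

```rust
use rayon::prelude::*;
use std::collections::HashSet;

// Placeholder types: an input is the id of the output it spends.
struct Tx {
    inputs: Vec<u64>,
}
type UtxoSet = HashSet<u64>;

fn validate_tx(tx: &Tx, utxos: &UtxoSet) -> Result<(), String> {
    // Read-only check against a shared UTXO snapshot, so it parallelizes freely.
    if tx.inputs.iter().all(|input| utxos.contains(input)) {
        Ok(())
    } else {
        Err("missing UTXO input".to_string())
    }
}

fn validate_block(txs: &[Tx], utxos: &UtxoSet) -> Result<(), String> {
    // Each transaction is validated on a rayon worker; any Err fails the block.
    txs.par_iter().map(|tx| validate_tx(tx, utxos)).collect()
}
```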