Atomic/Isolated/Synchronous transaction execution #497

Open

norwnd opened this issue Aug 13, 2023 · 1 comment

norwnd commented Aug 13, 2023

👋 I'm creating this issue to summarise the current state of the "atomic/isolated/synchronous transaction execution" topic in the Near ecosystem so far.

It is based on my imperfect current understanding, so feel free to edit this description (it'll probably serve as documentation of sorts for those looking for details).

A note on terminology: below, when I say "transaction" I typically mean (depending on context) an "atomic/isolated transaction" - as in an SQL / Ethereum / Solana transaction (not in the sense of a "Near transaction", which is merely the initiation).


There are 2 distinct ways one can look at what a blockchain is:

  1. a set of standalone applications (smart contracts) that can potentially call each other's APIs and must deal with the different possible outcomes that result from such interactions (e.g. one API call was successful, the other one failed, ... only some state modifications were applied)
  2. a giant global programmable database with strong Atomicity and Isolation guarantees (the A and I of ACID in the traditional DB world)

At this point in time Near is clearly following approach 1, while this discussion will explore and expand on approach 2. Monolithic blockchains implicitly embrace approach 2 simply because they get its benefits mostly "for free", unlike a distributed system like Near, where you would have to trade some scalability for it. I believe these trade-offs are worth discussing, documenting in one place, and potentially acting on at some point in the future.

The most fruitful discussion on this topic seems to have been championed by @nearmax and resides here, while certain relevant aspects are partially covered in #478, #480, #481.

Atomic transaction execution delivers a much simpler developer UX (lower cognitive load, lower barrier to entry) and also results in fewer potential bugs in smart contracts.
Additionally, smart contract interoperability/composability will likely only become more important over time, since it has network effects.

I'm primarily focusing on the atomic aspect of transaction execution; while "synchronous" is synonymous with "atomic" in some contexts, it might be misleading to equate these concepts. Transaction execution in most blockchains is atomic (meaning a transaction either succeeds in applying its state changes, if any, or is completely rolled back) and just happens to be synchronous (unlike what Near has - where a transaction results in a tree of execution blocks = receipts, each receipt running atomically to success/failure but potentially concurrently with receipts from other trees - which is done to shard blockchain state and parallelize execution).
In the world of atomic transactions, isolation is also very important - most monolithic blockchains are really serializable transaction sequences (or even strictly serialized ones, with causality preserved). It's hard to reason about updating smart contract data without a transaction serializability guarantee (and hard to implement updates correctly);

consider, for example, a scenario where a smart contract mints tokens such that the desired outcome is:

- the total amount of tokens minted must not exceed, say, 1000
- the contract itself doesn't check the amount of tokens minted against that limit; instead it assumes the caller will do that check and respect the limit (perhaps the contract doesn't even know what this limit is supposed to be)
- each smart contract call mints 1 token

if we have 999 tokens minted at some point, then with weaker-than-serializable isolation guarantees and the simplest "intuitive" implementation, several concurrent callers will read the current minted amount as 999 and might issue 2, 3 or more mint calls in total - ending up with 1001 or more issued tokens
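
To make the race concrete, here is a minimal self-contained Rust sketch (not near-sdk code; the contract and names are invented purely for illustration) of the "intuitive" caller-side check going wrong when two callers read the same stale value:

```rust
// Toy model of the race described above (illustration only, not near-sdk
// code; `TokenContract` and `LIMIT` are invented names). Two callers each
// read the minted amount, check it against the limit they are supposed to
// enforce, then mint. Without serializable isolation both reads can observe
// the same value, so the limit ends up being exceeded.

const LIMIT: u64 = 1000;

struct TokenContract {
    minted: u64, // the contract itself does NOT check this against LIMIT
}

impl TokenContract {
    fn minted(&self) -> u64 {
        self.minted
    }
    fn mint_one(&mut self) {
        self.minted += 1;
    }
}

fn main() {
    let mut token = TokenContract { minted: 999 };

    // Both callers read state "at the same time", i.e. before either write lands.
    let seen_by_caller_a = token.minted(); // 999
    let seen_by_caller_b = token.minted(); // 999, same stale snapshot

    // Each caller's own check passes against its stale read...
    if seen_by_caller_a < LIMIT {
        token.mint_one();
    }
    if seen_by_caller_b < LIMIT {
        token.mint_one();
    }

    // ...so the invariant the callers were supposed to uphold is broken.
    assert_eq!(token.minted(), 1001);
    println!("minted = {} (the limit was {})", token.minted(), LIMIT);
}
```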

So, if one wants to treat the blockchain as a giant global programmable database, one needs atomicity and isolation first; synchronicity is mostly a nice addition to have. Synchronicity without atomicity/isolation (in the sense described here) would, in a web2 analogy, be like calling some 3rd-party API from your app while locking everyone else out of that same call - which is a niche use-case.

From a devX perspective, asynchronous smart contract calls make little sense in an atomic/isolated-transactions world because synchronicity is pretty much expected there, I think, so it would be better to make atomic/isolated transactions synchronous (essentially getting rid of promises); that said, async execution within the same single transaction is also possible - it would just leave some concurrency pitfalls open for developers. The current non-atomic asynchronous execution model can also remain, but it shouldn't reveal itself to developers unless they specifically want to use it (defaults should be newbie-friendly): a power user could call smart contracts through non-atomic transactions if the contract specifically declared itself "unsafe". This is necessary because today a smart contract developer usually writes the contract such that execution can be aborted at any point and any state changes made so far will be rolled back - which is exactly what happens when a smart contract is called as part of an atomic transaction; it is a very convenient and almost expected feature. Likely, Near smart contracts will have to choose between two non-overlapping worlds: the "safe" world of atomic/isolated transactions or the "unsafe" asynchronous world, because free interoperability between the two doesn't seem viable (perhaps an additional abstraction layer could be built later on to bridge them).

I'm not really suggesting a readily-available solution (there are no easy ones), but I'd like to provide some avenues to explore (the two-phase commit approach seems very promising to me):

Naive MVCC-like design I've considered, just to demonstrate how bottlenecked it gets (a toy sketch follows the list below):
  • receipts are applied asynchronously (across different shards), building a v2 on top of the current v1 data version; v1 keeps being served for reads until we COMMIT to v2 (or ROLLBACK to v1); thus we get snapshot isolation (not even serializable transactions)
  • each piece of data being updated to v2 is locked for write (locally, within its data shard) and concurrent attempts to lock it for write result in errors (hence, occasionally, a receipt that writes concurrently to the same data will fail and need to be retried until it runs out of gas, or maybe the whole transaction should fail right away so the system keeps progressing)
  • once ALL receipts across ALL shards have finished execution, the transaction is ready to either COMMIT or ROLLBACK (perhaps, as an optimisation, rollback can be initiated as soon as any of the receipts fails)
  • to COMMIT or ROLLBACK the v2 changes, ALL data pieces across ALL shards must switch from v1 to v2 in the blink of an eye (as a single operation, such that an outside observer can't possibly see v2 state in shard A and v1 state in shard B - because that would violate the snapshot isolation property most applications desire in transactional systems); this can be done under a briefly held global lock, but that is prohibitively expensive in a distributed system (especially since it must also work with malicious validators); additionally, to preserve the snapshot isolation property we must not allow ANY reads of data that is in the process of switching from v1 to v2 (until that switch is finished), which means locking reads on that same global lock, or outright rejecting reads while the global lock is held for writing
  • after a successful COMMIT or ROLLBACK, the write locks held locally by each shard can be released (effectively allowing the next writer to act on the data that was locked for the duration of the previous transaction)
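
A toy, single-shard illustration of the versioned-cell idea from the sketch above (plain Rust; all names are invented, and the hard part - switching ALL shards at once - is deliberately not modelled):

```rust
// Toy, single-shard model of the MVCC-like idea above (illustration only;
// `VersionedCell` is an invented name). Readers always see v1; a transaction
// prepares v2 under a local write lock; COMMIT flips v2 into v1, ROLLBACK
// drops v2. The hard part called out above - flipping ALL shards at once,
// byzantine-fault-tolerantly - is deliberately not modelled.

struct VersionedCell<T> {
    v1: T,         // committed version, served to all readers
    v2: Option<T>, // pending version; Some(_) doubles as the local write lock
}

impl<T> VersionedCell<T> {
    fn new(value: T) -> Self {
        Self { v1: value, v2: None }
    }

    /// Reads never block: they see the last committed version (snapshot isolation).
    fn read(&self) -> &T {
        &self.v1
    }

    /// Start a write; fails if another transaction already holds the write lock.
    fn propose(&mut self, new_value: T) -> Result<(), &'static str> {
        if self.v2.is_some() {
            return Err("write-write conflict: data already locked, abort or retry");
        }
        self.v2 = Some(new_value);
        Ok(())
    }

    fn commit(&mut self) {
        if let Some(v) = self.v2.take() {
            self.v1 = v; // the "switch" - in the real design ALL shards must do this at once
        }
    }

    fn rollback(&mut self) {
        self.v2 = None;
    }
}

fn main() {
    let mut cell = VersionedCell::new(999u64);
    cell.propose(1000).unwrap();          // tx A locks the cell and prepares v2
    assert_eq!(*cell.read(), 999);        // concurrent readers still see v1
    assert!(cell.propose(1001).is_err()); // tx B conflicts and must abort/retry
    cell.commit();                        // switch: readers now see 1000
    assert_eq!(*cell.read(), 1000);
    cell.rollback();                      // no pending v2, so rollback is a no-op here
}
```
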
CockroachDB-like "Lock-Free Distributed Transactions" layer

Based on the limited research I've done so far, it seems the Near sharding architecture maps somewhat well onto the CockroachDB architecture (with a Near shard being equivalent to a Raft-based distributed key-value store, upon which the "Lock-Free Distributed Transactions" layer is built); however, I'm sure plenty of differences will show up once someone goes deeper and tries to merge the two. Still, CockroachDB is something of a simple gold standard to compare against in terms of atomic/isolated transaction execution implemented in a distributed system.

Pros:

  • provides full atomicity and isolation guarantees in the way most developers want or even expect (simplifies smart contract development to the highest degree possible, improves default smart contract security)
  • ...

Cons:

  • likely, it's a hard-to-implement protocol change
  • the user might need to retry their transaction (because it was aborted to resolve contention), possibly multiple times, and might be charged for the gas spent on those retries; that doesn't seem too bad compared to the upside - many databases in the web2 world can drop transactions in a similar manner, and transaction rebroadcasting is a thing in the blockchain world too (and can be automated to some extent)
  • for application devs/users, congestion could be a problem (higher transaction latency, lower blockchain bandwidth) - although, it can be designed around
  • I'm not sure whether it's possible (or how easy it is) to implement this approach such that it interoperates with the current one (such that developer chooses one of: atomicity+isolation or performance)
  • ...

Two-phase commit, based on smart contract robustness

It seems the following 2 properties can be leveraged to our advantage:

  • a smart contract call (~ a network request) can't unpredictably fail due to timeouts/routing/... - it's 100% reliable (unlike the network in web2)
  • smart contract execution itself cannot crash without reporting back or lose data (it can fail in predictable ways, e.g. by running out of gas, but that can be handled in the same manner as a business-logic failure)

How the two-phase commit protocol maps onto the Near receipt-execution model (a simplified sketch follows the list below):

  • a transaction consists of 2 phases ("propose changes" and "apply (commit/rollback) changes")
  • the "propose changes" phase executes receipts as Near currently does, except it doesn't apply the changes immediately (it stores the diff and waits for the 2nd phase); it also locks the affected data for write/read access (restricting read access is necessary to get from snapshot isolation to serializable: a read lock allows multiple readers + 0 writers, while a write lock allows 1 writer + 0 readers to operate on the data at the same time); any other transaction that tries to read/write the same data will abort (or maybe wait and retry in a loop, though that might be a recipe for deadlocking until retries run out), similar to how running out of gas aborts
  • the outcome of each proposed receipt gets aggregated and returned to the original transaction initiator (similar to how a promise gets resolved), and once the initiator has ALL the outcomes, the "apply changes" phase kicks in to commit or roll back this transaction (if ALL outcomes are successes it's a commit - a rollback otherwise)
  • since we want to achieve serializable (not snapshot) transaction isolation, we must commit ALL changes across different shards simultaneously (each shard already "has" the changes - we only need to flip a switch on ALL of them at the same time, but do so in a byzantine-fault-tolerant way = through Near consensus); global locking is a bad option, so we could instead send out "commit-receipts" to all affected shards, and they would wait out a "safe" amount of time before committing the changes (what "safe" means might not be very well defined; if there is no Near-protocol-guaranteed upper bound on how long it takes for a receipt to get, from the moment it is produced, into a chunk producer's receipt pool, we could just wait out, say, 5 seconds and maybe sometimes lose the serializable property but still get guaranteed snapshot isolation)
  • applying commit-receipts must happen BEFORE applying any other receipts; this way an outside observer (e.g. another smart contract executing its receipt and reading some state we are about to commit) always sees the effects of the commit immediately when executing in the same chunk (preserving the serializable transaction isolation property)
  • essentially, the two previous points ^ mean we want to commit all changes initiated by a particular transaction in exactly 1 block (each affected chunk committing the changes before processing the other receipts it has). I'm not sure whether there is some mechanism in Near consensus to hook into to accomplish this - if there is, it's probably best to leverage it. Perhaps a primitive to achieve it can be built as follows: to ensure each chunk contains ALL commit-receipts, the chunk producer just needs to compare commit-receipts against propose-receipts (each propose-receipt must have exactly 1 corresponding commit-receipt, so the chunk producer will just wait until he has them all) and must propose chunks that contain either ALL or NONE of the commit-receipts related to a particular transaction commit; to ensure each block contains ALL chunks related to a particular transaction commit (for each such transaction), the block producer must compare the number of chunks that reference that transaction (in his block candidate) against a certain number N present within each commit-receipt (this number N is set by the "apply (commit/rollback) changes" phase initiator and is built by that initiator from the data relayed to him as the result of the "propose changes" phase, i.e. the outcomes of each proposed receipt - basically the receipt execution results; assuming each outcome knows the ID of the shard where the corresponding receipt was executed, the aggregated result will contain the IDs of all affected shards). Ideally, chunk producers and block producers should prioritize transactions that are ready for commit; I need to investigate deeper how to achieve that though (perhaps through incentives that reward such behaviour)
  • as part of applying commit-receipts, the write/read locks acquired locally on each shard (protecting access to the data being modified by the atomic/isolated transaction) must be released; one caveat with these locks is that they must not block/prevent execution of receipts derived from the same transaction when it accesses the same data multiple times, because we don't want a self-deadlock and there is no need for such locking anyway (meaning receipts must know which transaction they belong to)
  • rollback-receipts, unlike commit-receipts, don't need to be applied in sync with each other (since it's basically just cleanup), but it's preferable to apply rollback-receipts as soon as possible (to release locks => minimise contention)
  • applying commit/rollback-receipts must not consume any gas because it's a system primitive that can't be allowed to fail
  • to align the incentives and cover the cost, the user will be charged a higher gas price (say, x2 or x3; perhaps this coefficient can be measured and adjusted dynamically based on the current contention/congestion level) for these atomic/isolated transactions - which also creates a natural congestion-penalizing force (when blockchain fees get high, users are less likely to keep wanting to pay x2/x3 of that)
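
To summarise the overall flow, here is a heavily simplified, self-contained Rust sketch of the propose / commit / rollback shape described above (all types are invented; consensus, byzantine behaviour, gas, read locks and the "all commit-receipts in one block" requirement are not modelled):

```rust
// Heavily simplified toy of the two-phase flow described above (illustration
// only: all types are invented; consensus / byzantine behaviour, gas, read
// locks and the "all commit-receipts land in one block" requirement are not
// modelled - only the propose / commit / rollback shape and local write locks).

use std::collections::HashMap;

#[derive(Default)]
struct Shard {
    committed: HashMap<String, u64>,             // live, committed state
    pending: HashMap<u64, HashMap<String, u64>>, // tx_id -> proposed diff
    locks: HashMap<String, u64>,                 // key -> tx_id holding the write lock
}

impl Shard {
    /// Phase 1 ("propose changes"): execute the receipt, store the diff, take local locks.
    fn propose(&mut self, tx: u64, diff: HashMap<String, u64>) -> Result<(), &'static str> {
        for key in diff.keys() {
            match self.locks.get(key) {
                // Locks held by the SAME transaction never conflict (no self-deadlock).
                Some(&owner) if owner != tx => return Err("contention: abort the transaction"),
                _ => {}
            }
        }
        for key in diff.keys() {
            self.locks.insert(key.clone(), tx);
        }
        self.pending.insert(tx, diff);
        Ok(())
    }

    /// Phase 2a (commit-receipt): apply the stored diff, release the locks.
    fn commit(&mut self, tx: u64) {
        if let Some(diff) = self.pending.remove(&tx) {
            self.committed.extend(diff);
        }
        self.locks.retain(|_, owner| *owner != tx);
    }

    /// Phase 2b (rollback-receipt): drop the diff, release the locks.
    fn rollback(&mut self, tx: u64) {
        self.pending.remove(&tx);
        self.locks.retain(|_, owner| *owner != tx);
    }
}

fn main() {
    let mut shard_a = Shard::default();
    let mut shard_b = Shard::default();
    let tx = 1;

    // Phase 1: propose-receipts run on each affected shard.
    let ok_a = shard_a.propose(tx, HashMap::from([("alice.near/balance".to_string(), 90)]));
    let ok_b = shard_b.propose(tx, HashMap::from([("bob.near/balance".to_string(), 110)]));

    // The initiator aggregates ALL outcomes, then decides: commit or rollback.
    if ok_a.is_ok() && ok_b.is_ok() {
        // In the real design both commit-receipts must be applied in the same block,
        // before any other receipts in their chunks.
        shard_a.commit(tx);
        shard_b.commit(tx);
    } else {
        shard_a.rollback(tx);
        shard_b.rollback(tx);
    }

    assert_eq!(shard_a.committed.get("alice.near/balance"), Some(&90));
    assert_eq!(shard_b.committed.get("bob.near/balance"), Some(&110));
}
```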

Pros:

  • based on my limited high-level understanding, it could be a relatively simple protocol change that builds pretty well on top of functionality Near already has
  • provides full atomicity and isolation guarantees in the way most developers want or even expect (simplifies smart contract development to the highest degree possible, improves default smart contract security)
  • the app developer can choose between code simplicity on the one hand and higher performance + lower transaction costs on the other

Cons:

  • for application devs/users, congestion could be a problem (higher transaction latency, lower blockchain bandwidth) - although, it can be designed around
  • for Near as protocol, higher blockchain cpu/memory/bandwidth/storage usage - although, it's compensated by more costly gas
  • the user might need to retry their transaction (because it can be aborted to resolve contention), possibly multiple times, and might be charged for the gas spent on those retries; that doesn't seem too bad compared to the upside - many databases in the web2 world can drop transactions in a similar manner, and transaction rebroadcasting is a thing in the blockchain world too (and can be automated to some extent)
  • those who opt out of atomic/isolated transactions (assuming they are the default), while getting better performance, might still be affected to some extent during congested times; it seems that favouring atomic/isolated transactions when locking resources is better for the system overall; this might not be applicable if smart contracts are split into independent "safe" and "unsafe" categories

Many equate "transaction" with atomicity+isolation; I feel it's too important to give up. The blockchain concept itself was conceived around preserving/serving/catering to valuable data, and a giant global programmable database seems like the unavoidable endgame (if smart contracts get the "set of standalone applications" treatment ... it's not much different than having Cosmos-like app chains or Ethereum L2s).


norwnd commented Aug 13, 2023

> unlike what Near has - where a transaction results in a tree of execution blocks = receipts, each receipt running atomically to success/failure but potentially concurrently with receipts from other trees

not sure, though, if receipts from the same tree (born from a single transaction) can execute concurrently with one another

I hope I got that ^ right because it isn't that obvious from just reading this doc (I suggest rephrasing/extending it somehow, maybe with my tree analogy).
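
For what it's worth, here is the toy "tree of receipts" picture I have in mind (plain Rust, not actual nearcore types; all names invented for illustration):

```rust
// Toy sketch of the "tree of receipts" picture (plain Rust, not nearcore
// types; names invented). A Near transaction only initiates execution: each
// cross-contract call spawns further receipts, and every receipt succeeds or
// fails atomically on its own, possibly interleaved with other receipts.

struct Receipt {
    receiver: String,       // account/contract the receipt executes on
    action: String,         // e.g. a function call
    children: Vec<Receipt>, // receipts spawned by this one (via promises)
}

fn count_receipts(r: &Receipt) -> usize {
    1 + r.children.iter().map(count_receipts).sum::<usize>()
}

fn main() {
    // tx -> a.near::swap -> { b.near::transfer, c.near::transfer -> d.near::log }
    let tree = Receipt {
        receiver: "a.near".into(),
        action: "swap".into(),
        children: vec![
            Receipt { receiver: "b.near".into(), action: "transfer".into(), children: vec![] },
            Receipt {
                receiver: "c.near".into(),
                action: "transfer".into(),
                children: vec![Receipt {
                    receiver: "d.near".into(),
                    action: "log".into(),
                    children: vec![],
                }],
            },
        ],
    };
    // Each of these 4 receipts runs atomically on its own, but the tree as a
    // whole is NOT atomic: a failure in one branch does not undo the others.
    assert_eq!(count_receipts(&tree), 4);
    println!("root receipt: {} on {}", tree.action, tree.receiver);
}
```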

On a similar note, I'm not sure the "Wasm suspension" approach (which seems to be the most preferable among those listed there) provides atomicity/isolation properties; imagine the following scenario:

contract A executes the following code:

```
...
-> (call to another contract) B.call_something => (success)
-> (call to another contract) C.call_something => (fail)
...
```

Who rolls back the effects of B? If it's up to A to negate the effects of B, then it isn't really the desired out-of-the-box atomicity many have in mind. And there is no isolation, since we might read the same data (in an external contract) multiple times, each time getting a different result (since it might be updated concurrently with our transaction running), because suspension and locking of a particular contract call does not "freeze" global state.

Maybe I didn't fully understand "Wasm suspension" or the Near execution model.
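
To make the question concrete, here is a toy model of the A/B/C scenario above in plain Rust (no near-sdk; all names invented), showing that today the compensation has to live in A - and B has to explicitly expose a way to undo its change:

```rust
// Toy model of the A/B/C scenario above (plain Rust, no near-sdk; all names
// invented). B succeeds and keeps its state change, C fails - nothing in the
// system undoes B automatically, so A has to carry its own compensation logic,
// and B has to expose a way to undo its change.

struct ContractB {
    counter: u64,
}

struct ContractC;

impl ContractB {
    fn call_something(&mut self) -> Result<(), &'static str> {
        self.counter += 1; // state change B applies and keeps
        Ok(())
    }
    fn undo_something(&mut self) {
        self.counter -= 1; // compensation B must explicitly provide
    }
}

impl ContractC {
    fn call_something(&self) -> Result<(), &'static str> {
        Err("C failed") // e.g. an assertion failure, or out of gas
    }
}

fn main() {
    let mut b = ContractB { counter: 0 };
    let c = ContractC;

    // Contract A's logic:
    let b_result = b.call_something(); // succeeds, B's state is already changed
    let c_result = c.call_something(); // fails

    if c_result.is_err() {
        // With atomic transactions this branch would not exist: the whole
        // transaction would simply roll back. Today, A must compensate by hand.
        if b_result.is_ok() {
            b.undo_something();
        }
    }

    assert_eq!(b.counter, 0);
}
```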

Update 1:

> not sure, though, if receipts from the same tree (born from a single transaction) can execute concurrently with one another

> I hope I got that ^ right because it isn't that obvious from just reading this doc (I suggest rephrasing/extending it somehow, maybe with my tree analogy).

Based on some examples I found, it looks like receipts in the same tree (born from the same transaction) can indeed execute concurrently with one another. That doesn't change anything, just a clarification.

Update 2:

Added a detailed, potentially promising two-phase commit approach to the list above.
