This repository has been archived by the owner on Aug 2, 2022. It is now read-only.

Fix DPOS Loss of Consensus due to conflicting Last Irreversible Block #2718

Closed
bytemaster opened this issue May 3, 2018 · 35 comments

@bytemaster
Contributor

This paper describes an improvement on the DPOS (Delegated Proof of Stake) algorithm that makes stronger guarantees that a network of nodes following the DPOS 3.0 protocol will not fall out of consensus. We define falling out of consensus as two nodes concluding that two different blocks are irreversible.

Background

Proof of work blockchains, such as Bitcoin, define consensus using the “longest-chain” rule. Under this rule no block is ever considered irreversibly confirmed. At any time someone could produce a longer chain built off of an older block and the node would switch to it. From this we can conclude that Bitcoin offers only a high probability of irreversibility, based upon the economic cost of attempting to change forks.

BitShares introduced Delegated Proof of Stake. Under this algorithm stakeholders elect block producers. Block producers are pseudo-randomly shuffled and assigned absolute time slots during which they may either produce a block or not. The blockchain with the most producers building on it will grow in length faster than a blockchain with fewer producers. Given two chains growing at different speeds, the faster one will eventually become the longest chain. Therefore, the original Delegated Proof of Stake algorithm offers similar guarantees to Bitcoin, namely that as blocks are added to a chain it is increasingly unlikely for another chain to be produced that reverses a block.

The nature of the DPOS scheduling algorithm communicates a lot of information to an observer. For example, an observer can detect that they are likely on a minority chain based upon the frequency of missed blocks. With 21 producers a node can accurately detect they may be on a minority fork after just 2 consecutive missed blocks (6 seconds). This allows users to be warned when the network is unstable and to wait longer for confirmation. Likewise, if no producers have missed a block in 21 blocks that follow your transaction you can assume with certainty that your block will not be reversed.
Rare Loss of Consensus (DPOS 2.0)

In DPOS 2.0 we introduced the concept of the last irreversible block. This is the most recent block which 2/3+1 of block producers have built off of. The theory is that if 2/3+1 of producers have built on a chain that confirmed a block then it is likely impossible for there to be any other fork.
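The DPOS 2.0 rule above can be sketched in a few lines (an illustrative sketch in Python, not EOSIO source; the function name and the 21-producer count are taken from this thread):

```python
# Illustrative sketch (not EOSIO source) of the DPOS 2.0 rule: the last
# irreversible block is the highest block that 2/3+1 of the 21 producers
# have built on top of.

NUM_PRODUCERS = 21
THRESHOLD = (2 * NUM_PRODUCERS) // 3 + 1   # 2/3+1 = 15

def last_irreversible(head_per_producer):
    """head_per_producer: each producer's highest applied block number.
    Returns the highest block number that 2/3+1 producers have reached."""
    heads = sorted(head_per_producer, reverse=True)
    return heads[THRESHOLD - 1]

heads = [120] * 15 + [100] * 6     # 15 producers at block 120, 6 lagging
assert last_irreversible(heads) == 120
assert last_irreversible([120] * 14 + [100] * 7) == 100  # 14 is not enough
```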

That said, enterprising folks have constructed contrived scenarios where a network split divides the network into two chains. Normally this would simply cause one or both chains to halt the advancement of the last irreversible block until one or the other is reconnected with 2/3+1 of the producers. In this normal behavior everything is fine and all nodes will converge on the one true chain once connectivity is restored. That said, there is a race condition whereby two subsets simultaneously switch forks, resulting in both forks reaching 2/3+1 on different blocks. When this happens, nodes on the two sides of the fork are unable to synchronize because neither will unwind beyond its last irreversible block. Manual intervention is required.

Under this situation one or both forks will cease advancing irreversibility, depending upon whether either fork ended up with 2/3+1 of the producers. The minority chain may still grow at half speed, but nodes waiting for irreversibility would no longer accept any transactions on the minority chain as finalized.

This failure mode reverses a single block, from which some services may experience loss. By our estimates, the probability of this happening is much less than the probability of a Bitcoin block with 6 confirmations being reversed. This bears out in the real-world operation of both BitShares and Steem, which have run for over 3 years without this situation being observed.

DPOS 3.0 + BFT

With EOSIO we are introducing inter-blockchain communication (IBC) which allows one chain to efficiently prove to another chain that a transaction is final. Finality is critical for IBC because once one blockchain accepts a message from another it is neither easy nor desirable to reverse both chains to correct the mistake.

Imagine for a moment that a blockchain was attempting to accept a Bitcoin deposit. A user submits all Bitcoin block headers plus 6 block headers built on top of the block he references. Based upon this proof the blockchain takes irreversible actions. Now imagine that Bitcoin forks and undoes those 6 blocks. It is not possible for this blockchain to reverse and reject the previously accepted Bitcoin transaction. Such an incident would require manual intervention and/or a hardfork. Such a hardfork/intervention would ripple through all chains that are connected. This is not a viable option.

To enable secure and reliable IBC under all non-byzantine situations DPOS 3.0 + BFT introduces a small change to how the Last Irreversible Block (LIB) is calculated. With this change we can prove that it is impossible for two nodes to come to different conclusions regarding the LIB without at least 1/3 of the block producing nodes being intentionally malicious. Furthermore, we can prove malicious behavior of even a single node.

The core idea behind DPOS is that each produced-block is a vote for all prior blocks. With this model once 2/3+ producers have built upon a particular block it has 2/3+ votes. This sounds good in theory except that it is possible for non-byzantine block producers to produce blocks on different forks at different times. When they produce these blocks they end up casting indirect conflicting votes for the block numbers that appear on each chain.

Suppose a network with producers A, B, and C has an issue that causes two block producers to lose communication for a short period of time such that producer A produces block N at time T and producer B produces block N at time T+1. Now suppose producer C breaks the tie by building block N+1 at time T+2 on top of B’s block N with time T+1. After this happens and A learns of C’s block N+1, A will switch to the longer fork. Next time A produces a block he will indirectly confirm B’s block N which conflicts with A’s previously produced block N.

Under the DPOS 2.0 rules, B’s block N would have votes from A, B, and C and would therefore become irreversible due to 2/3+1 confirmation. Under DPOS 3.0 we require A to disclose that he previously confirmed an alternative block N. Due to this disclosure the network will not count A’s block as a vote for B’s block N. This leaves B’s block N with only 2 votes, which is not enough to achieve irreversibility.

Under DPOS 3.0, B’s block N will never achieve direct irreversibility, because that requires votes from A, B, and C, and A has already cast a vote for an alternative block N. Instead, block N+1 will become irreversible once A produces N+2 and B produces N+3. This gives block N+1 the 3 votes necessary to reach 2/3+1. Once C’s block N+1 is irreversible, B’s block N is deemed irreversible as well.

To implement this algorithm each block producer includes the highest block number (H) they have previously confirmed on any fork in the block header. When block N is applied only blocks in the range [H+1, N] receive votes toward irreversibility.

Any producer who signs two blocks with overlapping confirmation ranges is deemed byzantine, and the pair of signed headers constitutes cryptographic proof of the misbehavior.
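The range rule and the overlap check can be sketched as follows (a minimal sketch with illustrative names, not EOSIO source):

```python
# Minimal sketch (illustrative names, not EOSIO source) of the header rule:
# a block carries H, the highest block number its producer previously
# confirmed on any fork, and casts votes for heights [H+1, N]; two signed
# headers from one producer with overlapping ranges prove misbehavior.

def vote_range(H, N):
    """Heights that receive an irreversibility vote when this block applies."""
    return range(H + 1, N + 1)

def overlaps(a, b):
    """True when two confirmation ranges share at least one height."""
    return a.start < b.stop and b.start < a.stop

first = vote_range(0, 4)   # producer confirms heights 1..4
honest = vote_range(4, 6)  # next block confirms 5..6 -- disjoint, fine
rogue = vote_range(0, 1)   # re-confirms height 1 on another fork

assert not overlaps(first, honest)
assert overlaps(first, rogue)  # cryptographic proof of byzantine behavior
```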

With this information we can now give a simple proof that, for two different blocks at the same height to each achieve 2/3+1 votes, at least 1/3 of the producers must sign conflicting ranges. This could happen in an honest network split where two good groups of size 1/3 create two alternative chains and a bad group of size 1/3 signs on both forks. In practice, a network with good connectivity requires 2/3+1 to be malicious to create two different blocks deemed irreversible.

Under these rules there are now two ways for producers to sign byzantine statements:

  1. Signing two blocks with the same block number, directly or indirectly
  2. Signing two blocks with the same block time

Honest nodes running the default software will never do these things. Therefore, it is possible to trivially punish all bad actors, even for failed attempts.

Credits

The solution to this problem was discovered collaboratively by Bart, Arhag, and myself along with some other members of the B1 team.

@bytemaster bytemaster added this to the RC2B milestone May 3, 2018
@RalfWeinand

"With 21 producers a node can accurately detect they may be on a minority fork after just 2 consecutive missed blocks (6 seconds)." .... 2 blocks = 6 seconds? that's outdated, right?

@eoseoul

eoseoul commented May 8, 2018

Thank you for detailed explanation about DPOS 3.0 + BFT.

We have translated the paper above into Korean as a reference for the release notes of Dawn 4.0.

https://github.com/eoseoul/docs/blob/master/ko/translations/Fix_DPoS_Loss_of_Consensus.md

EOSeoul
https://steemit.com/@eoseoul

@vbuterin

vbuterin commented May 15, 2018

This does not seem to actually be safe. Consider a case with four validators, so we are allowed one byzantine. Suppose before time T the commonly agreed head is Z; then, at times (T, T+1, T+2, T+3), validators (A, B, C, D) make blocks extending a chain from Z. A now has votes from B, C and D and so is finalized. Now, before timeslot T+3 ends, D also (byzantine-ly) makes a block (call it D') on top of Z. Then, at times (T+4 ... T+11), (A, B, C, D, A, B, C, D) make blocks on top of D' (this is ok because each validator is making a block at a height one higher than the block they previously made). The second A block in this chain has three votes and so is also finalized. Hence, two conflicting blocks got finalized.
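The counting in this scenario can be traced with a small simulation (a sketch under the one-round rule as described in this thread; the producer names, fork labels, and 2f+1 = 3 threshold are illustrative):

```python
# Sketch (not EOSIO code) of the counterexample: 4 validators A, B, C, D,
# tolerating f = 1 byzantine, finality threshold 2f+1 = 3. Each block
# carries its producer's highest previously-confirmed height H; applying a
# block at height N casts votes for heights [H+1, N] on that block's fork.

THRESHOLD = 3  # 2f+1 with f = 1 byzantine validator out of 4

votes = {}                               # (fork, height) -> set of voters
last_confirmed = {v: 0 for v in "ABCD"}  # highest height each has confirmed
finalized = set()                        # (fork, height) with 2f+1 votes

def produce(fork, height, producer, claimed_H=None):
    """Produce a block; it votes for heights [H+1, height] on its fork."""
    H = last_confirmed[producer] if claimed_H is None else claimed_H
    for h in range(H + 1, height + 1):
        votes.setdefault((fork, h), set()).add(producer)
        if len(votes[(fork, h)]) >= THRESHOLD:
            finalized.add((fork, h))
    last_confirmed[producer] = max(last_confirmed[producer], height)

# Fork 1: Z <- A(1) <- B(2) <- C(3) <- D(4); A's block finalizes.
for height, producer in enumerate("ABCD", start=1):
    produce(1, height, producer)
assert (1, 1) in finalized

# Byzantine D builds D' at height 1 on Z, falsely claiming H = 0.
produce(2, 1, "D", claimed_H=0)

# A, B, C, D extend fork 2, each block one height above its producer's
# previous block, so no honest confirmation ranges overlap.
for height, producer in zip(range(2, 10), "ABCDABCD"):
    produce(2, height, producer)

# A's second fork-2 block (height 6) also finalizes, implying finality of
# its ancestor D' at height 1 -- conflicting with fork 1's finalized block.
assert (2, 6) in finalized
```

Only D ever signs overlapping ranges here, so the safety failure occurs with exactly one byzantine validator.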

In general, it's not possible to achieve BFT safety on a block without at least two messages from most nodes that directly or indirectly reference that block; this algo tries to do it in one round and it's likely impossible to actually do that safely. If you want an intuitive and good way of doing this, I recommend just using the algorithm in our Casper FFG paper: https://arxiv.org/abs/1710.09437

@ramtej

ramtej commented May 15, 2018

Guys, can we please take Vitalik's concerns seriously (I know we always do). What if we take the safer option for the mainnet launch, with these 2+ messages from most nodes, and possibly lose some performance, rather than ruin EOS's reputation right at the beginning? We expect that sidechains will emerge very quickly, and thus IBC must behave robustly and safely under all circumstances. It is not a feature, it is a must, and we do not want to see sidechain domino collapses when losing consensus.

Jiri / EOS Germany

@bytemaster
Contributor Author

I am reviewing V's comment now.

@wanderingbort
Contributor

In as much as this could be controversial, I will preface it with "this is my opinion" 😄

IBC must behave robustly and safely under all circumstances
(emphasis mine)

This is, practically speaking, impossible. Or rather, I'm not currently aware of any consensus algorithm that solves for all circumstances.

For instance, it is conceivable, but highly improbable, that there has been a dark side fork of Bitcoin since day one that has maintained 101% of the visible main net hash rate. At any time, such a fork could be released and "reset" the entire state of that chain. It is possible at a protocol level but not at a practical level. This vague concept carries through all PoW chains.

Likewise, crypto-economic protocols like Casper are balanced by incentives and sometimes slashing conditions. However, finality is a tricky thing, and if slashing conditions occur after the damage is done, or if actors act irrationally in the short term but sustain their actions long enough to deceive a dependent side effect (like a side chain), then they also cannot solve for all circumstances.

The proper consideration is under what conditions the system breaks down and, when it does, what is the path to restitution.

That said, in the example above, there is one error/omission/irregularity/ambiguity: was D' built off of Z or C?

If D' was built off of Z then we have easy cryptographic proof of bad behavior by D, as blocks are numbered sequentially with no gaps and D has confirmed both the block Z+1 produced by A and the block Z+1 produced by D. Furthermore, we have cryptographic proof from A, B, and C that they were aware of A's version of Z+1, so any confirmation of D's Z+1 is considered bad behavior as well. In fact, A, B, and C cannot safely extend D's rogue chain until D, by itself, adds enough blocks that A, B, and C have not confirmed blocks at that height. As D is also bound to a time slot that only comes around 1/4th as often as a block, this means that A, B, and C would have to decide to do nothing for an extended period of time to be able to switch without also creating cryptographic evidence of wrongdoing. If A, B, or C do anything to extend the existing chain, they will extend the "black out" period on the rogue chain.

If D' was built from C then this doesn't seem to be an error. The two "conflicting" blocks are in the same chain and therefore not in conflict.

As each producer gets "paid" by creating blocks and risks their reputation and future income on bad behavior there are pretty strong incentives to neither commit the byzantine confirmations nor "do nothing" long enough to switch forks without byzantine confirmation. It isn't perfect because, as above, if the market is totally irrational then no amount of incentives can be levied to solve this.

So, what happens when it breaks down?

Halt and catch fire 😄

The problem with byzantine behavior is that we don't have a complete view of the extent of the issues it has created downstream in side chains. Without that information, we cannot truly or honestly determine how to restore order. Even what "order" is would be a matter of opinion. What if there is no way to restore order without harming an innocent bystander? It should be the community's role to define what happens when we halt and catch fire. It is the software's role to minimize the odds of the worst case, with the concession that it probably cannot be eliminated.

Everything after that is a discussion of tolerances.

@bytemaster
Contributor Author

[diagram omitted]

I have mapped this out and shown that if Z is deemed an irreversible starting point, and producers A, B, C, and potentially D build on top of it such that C double-produces, then D would not switch to C', because by the time C double-produces, A's block 101 is already irreversible.

@bytemaster
Contributor Author

bytemaster commented May 15, 2018

Z <- A <- B <- C
 \------------ C' <- D <- A <- B <- C

This situation could happen if A disconnects before B and C and rejoins to C' and D. For this to happen a byzantine producer would have to take advantage of a network split to trick D & A.

A cannot vote for C', which has the same block height as A's first block; therefore C' is not irreversible.

@ramtej

ramtej commented May 15, 2018

@bytemaster @wanderingbort - thanks for the clarifications. I'm trying to understand the various scenarios and review V's paper.

Independently of that, I think this is also about expectation management. Users, developers, and businesses fuel the expectation that if it's on the blockchain, then it's the truth.

So our job on the software side is to make such 'halt & catch fire' scenarios as rare as possible AND at the same time develop and propose patterns and principles that describe how to deal with situations where one is sitting on an inconsistent fork.

Summary: we should communicate the theoretical and practical limits, constraints, and issues in a way that future developers and users can understand the implications.

@bytemaster
Contributor Author

After talking this over with Bart and Arhag we have concluded that a small tweak to our current algorithm will resolve this. Each block header will now include a "proposed LIB", which is equal to what we currently deem the LIB. Once 2/3+1 producers have confirmed a proposed LIB, it becomes the real LIB.
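The two-phase rule can be sketched as follows (an assumed structure for illustration, not the actual EOSIO implementation):

```python
# Sketch (illustrative, not the EOSIO implementation) of the two-phase
# rule: each header carries a "proposed LIB"; once 2/3+1 producers have
# produced blocks confirming the same proposed LIB, it becomes the real LIB.

NUM_PRODUCERS = 21
THRESHOLD = (2 * NUM_PRODUCERS) // 3 + 1   # 2/3+1 = 15

def advance_lib(headers):
    """headers: (producer, proposed_lib) pairs from the current chain.
    Returns the highest proposed LIB confirmed by 2/3+1 distinct producers,
    or None if no proposal has enough confirmations yet."""
    confirmers = {}
    for producer, proposed in headers:
        confirmers.setdefault(proposed, set()).add(producer)
    confirmed = [lib for lib, who in confirmers.items() if len(who) >= THRESHOLD]
    return max(confirmed, default=None)

headers = [("producer%d" % i, 100) for i in range(15)]
assert advance_lib(headers) == 100         # 15 confirmations: final
assert advance_lib(headers[:14]) is None   # 14 is not enough
```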

V. is right that you require double-confirmation and that our attempts at short-circuiting it result in edge-case failures.

Impact on protocol: 4 bytes per block + 2.5x increase in light client proof size (from 15 sigs, to 30-37 sigs).

@bytemaster
Contributor Author

While drafting #3088 I learned that it is not necessary to bloat the block header to declare something that is already implied by the state and signing the block.

@vbuterin

Each block header will now include that "proposed LIB" which is equal to what we currently deem the LIB. Once 2/3+1 producers have confirmed a proposed LIB then it becomes the real LIB.

This sounds to me like "every block contains a reference to a recent ancestor which they want to potentially become finalized. Once 2/3+1 producers make a block that contains such a reference for a specific block within the same chain, then block producers in the next round can reference it again and if 2/3+1 do then it's finalized".

If so, then as long as you add source checkpoint references (see the Casper FFG paper) and add certain slashing conditions around those, then you've basically got Casper FFG.

@bytemaster
Contributor Author

@vbuterin

We have established that the following behaviors can be slashed:

  1. producing two blocks with the same timestamp
  2. producing two block confirmations for the same block number, whether directly or indirectly
  3. producing two BFT pre-commit messages for the same block number or block timestamp

All of these are trivial to prove by producing the two block headers or signed BFT pre-commit messages, and all of this byzantine behavior can be pinned on a single bad actor.

Given this it is trivial to create a staking contract that will slash producers for this behavior. Absent this behavior it is impossible to advance the last irreversible block.

@wanderingbort
Contributor

wanderingbort commented May 15, 2018

This sounds to me like "every block contains a reference to a recent ancestor which they want to potentially become finalized.

It is more like "every block implicitly indicates a set of blocks which the producer of the block automatically votes to finalize", where that implicit set is the reduction of all the explicit confirmations in the block headers of the chain leading from genesis to that block, filtered so that it contains only those blocks which carry 2/3+1 confirmations.

Once 2/3+1 producers create blocks where these implied sets overlap to provide 2/3+1 confirmations of finality for any block, that block and its ancestors are finalized. Napkin math suggests that a naive implementation would produce finality 4/3 + 2 producers into a healthy mono-chain network. I do wonder if we can reduce that to 4/3 + 1, but this has been a sufficiently brain-baking day.

As our single signatures end up being applicable over a large set of implied messages, I'm not sure we will see the same efficiency benefit from source checkpoint references that Casper sees, but I admit I haven't run the numbers completely yet.

I would also suggest that while we don't have slashing conditions on a bond we do have equivalent deterrents to misbehavior in the economics of producer election and the catastrophic risk to future earnings that would come along with provable misbehavior. Discussions of the exact economics that deter bad behavior in a protocol like this are probably not productive as long as the basic protocol properly documents where economics must shore up a potential deficiency of the software.

@vbuterin

We have established the following behavior can be slashed:

  • producing two blocks for the same time stamp
  • producing two block confirmation for same block number whether directly or indirectly.
  • producing two bft pre-commit messages for the same block number of block timestamp

Ah I see, so a "block confirmation" and a "bft pre-commit message" are different things? In that case, this seems like it would correspond more closely to an older version of FFG. The main advantage of our newer approach is that a confirmation for round N is simultaneously a pre-commit for round N+1, so you get ~17% lower average time to finality at basically no cost.

In any case, I am satisfied that there are plenty of reasonable protocols with safety and plausible liveness proofs in this general area, though it's important to run through the proofs for any specific instantiation to make sure the protocol can't break or get stuck.

@sunnya97

While drafting #3088 I learned that it is not necessary to bloat the block header to declare something that is already implied by the state and signing the block.

Consider light client efficiency as well. If you include the "proposed last irreversible block" in the block header, then I can give you a succinct proof of a block's finalization: 2f+1 block headers with the same PLIB (from different producers).
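The verifier side of this proof can be sketched as follows (illustrative names; this assumes the PLIB-in-header design discussed above):

```python
# Sketch of the light-client check described above (illustrative): a proof
# of finality for block B is 2f+1 headers from distinct producers, each
# carrying B as its proposed LIB.

F = 7                    # tolerated byzantine producers out of 21
PROOF_SIZE = 2 * F + 1   # 15 headers

def proves_final(proof_headers, block_id):
    """proof_headers: (producer, proposed_lib) pairs from a candidate proof."""
    producers = {p for p, plib in proof_headers if plib == block_id}
    return len(producers) >= PROOF_SIZE

proof = [("p%d" % i, "block_100") for i in range(15)]
assert proves_final(proof, "block_100")
assert not proves_final(proof[:14], "block_100")
```

Note the deduplication by producer: repeated headers from the same producer must not count twice toward the 2f+1 bound.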

@kennethffx2

I'm not 100% certain I'm following this correctly, but, am I correct to say:
EOS main net will essentially be Ethereum's Casper?
So the world is getting a Casper-clone June 1st?

When is Ethereum going to release Casper?

@bytemaster
Contributor Author

@vbuterin we actually have redundant BFT layers; the BFT pre-commit messages are "optional" and allow one to accelerate finality to 1-2 seconds rather than waiting on pipelined finality using only the block headers.

@bytemaster
Contributor Author

[diagram omitted]

@el33th4x0r

Can someone chime in on the liveness conditions implied by the proposed rules? In particular, I'm worried about the case where a node in one partition votes on a certain block, and then the partition heals to reveal that the rest of the network has adopted a conflicting block. That node's vote on the LIB may now lead to a slashing condition, and therefore it cannot participate in the protocol anymore. If there are sufficiently many such healed partitions, the network may lack the weight required to make progress, even though nodes are online and partitions have healed.

@bytemaster
Contributor Author

An honest node will never have to worry about getting slashed as they are in complete control over the integrity of the messages they sign. No one else can cause you to inadvertently sign a slashable statement.

@el33th4x0r

Let me try to clarify. I understand that no node can be forced to equivocate. But by the nature of the rules you have in place, a node might find itself, after a healed partition, having signed block A, while the majority adopted the series A'BCD. That node now wants to add to D, but doing so would implicitly vote for A', creating a slashing condition, if I understand the rules correctly (and likely, I don't). In this case, the node is effectively prohibited from further participation in the protocol, a terrible outcome.

Can you please explain what you do to avoid this outcome? Many thanks, I appreciate your help in clarifying the protocol (and I did my part by reading the link, which didn't mention liveness or this problem).

@bytemaster
Contributor Author

bytemaster commented May 16, 2018

First of all, A' and A == a slashing condition already: no producer should produce two blocks with the same block number or timestamp, and I assume that the first A' is as opposed to another A at the same time.

Second, producing a block does not imply confirming all prior blocks; the block header includes the "number of prior blocks" the producer is confirming. If A was off on a minority fork and previously approved blocks N and N+1, then when A rejoins the main chain it would not affirm N and N+1 on the main chain; instead it would only affirm blocks N+2 or greater.

@el33th4x0r

Ok, thank you for the clarification. So you have a limited form of selective endorsements, in the form of "k prev consecutive blocks affirmed."

What happens when there are 4 partitions (ie all minority chains), who extend Z with A, B, C and D separately. Post-partition, let's say everyone jumps on to build on A's block. B would create AB', which isn't a conflict because B' isn't at the same height and it only affirms B' but not A, right?

Then C extends the chain, making AB'C', affirming B' but not A, right? I assume the same argument applied to D.

What happens when everyone builds on a block A that will never be committed? When do we execute the transactions in block B'? Do we wait for the ancestry including A to be fully committed (which means this chain will never make progress beyond A), or do we do it as soon as B' has 2/3rds of the votes (which means we can get out of order execution on some nodes as affirmations arrive at different times on different nodes)?

BTW, It would really be good to have a tight spec of this protocol. I'm genuinely curious about EOS's solution here. There are some difficult edge cases I want to get to, but I don't yet understand the base protocol.

@bytemaster
Contributor Author

While A might never be "committed" directly, D and later blocks might be, which would indirectly confirm A.

@el33th4x0r

Assume D is Byzantine:

  1. Why can't we have AB'C'D' (confirms B' and therefore implicitly A), a second partition immediately after C' but before D' is widely disseminated to all, which then triggers DC"A"B" (confirms C" and implicitly D)? This is a safety failure, and in general, finality seems meaningless as any decision can revert.

  2. What keeps the system from forking into two chains, ABAB... and CDCD..., never making a decision on either, but being extended at the same rate on both, with the help of the Byzantine node?

I don't mean to bombard you guys with liveness questions, but I have genuine questions and your answers have been very helpful. Is there a more cogent liveness argument you can provide? That can help avoid having to enumerate all the edge cases.

@ragjunk

ragjunk commented May 16, 2018

It's good to see EOS employing a "pipelining" approach to speed up block finality. The "serial" approach to block production and finality can only take you so far. We at @storecoin recognized this early on in developing Storecoin/BlockFin, which uses a pipelined approach for concurrent block assembly and validation. We have actually been discussing this approach with @el33th4x0r since December 2017. We are still concerned about EOS's 21 block producers (among potentially thousands of validators) creating blocks, because after a network partition it is this small subset of validators who will take the EOS blockchain forward. In other words, the blockchain's liveness and continuity depend on who the BPs are and the "view" of the blockchain they have at the time of their election after the network partition. The thing is, different people interpret the failure scenarios differently, and the EOS spec doesn't explicitly mention how such conditions are handled. Some tightening of the spec on failures, including network partitions and how EOS recovers from them, would be useful, especially considering:

  1. the mainnet launch is 2 weeks away
  2. the suggested changes are probably not well tested in the testnet for possible side effects

Some side effects take much longer test cycles to show up, and this change in finality is significant enough to warrant detailed models of the failure scenarios.

All the best to EOS team for the mainnet launch.

@luigiborla

@bytemaster why do you keep teaching Vitalik how to fix his blockchain while he supposes himself to be better? Vitalik, look: Dan's blockchains WORK, and so good, so fast, and without fees, a dream come true.

@ghost

ghost commented May 16, 2018

@luigiborla This is github, not reddit. Holster them shills and celebrate constructive collaboration.

@danrobinson

@bytemaster:

We have established the following behavior can be slashed:

...producing two block confirmation for same block number whether directly or indirectly.

In this context, what does it mean to produce a block confirmation? Does this mean a "vote" for a block (in the DPOS 3.0 sense, where each block votes for all of its ancestors whose heights are greater than its specified H)? Does it mean producing a block whose implied PLIB is the voted-for block? Or something else?

And what does it mean to "indirectly" produce a block confirmation? It can't just mean confirming a subsequent block in the chain (otherwise any two confirmations on different forks, at any heights, would violate the slashing condition).

@subos2008

I notice this issue is closed, does that mean it's solved or not an issue?

@quantumas

quantumas commented May 19, 2018

It might also be useful to have a write-up detailing the security downsides of using slashing conditions, which are so casually suggested here as a good thing when they often are not (they imply bonds):

  • centralizing control, along with the Pareto principle
  • further exacerbating distribution issues (especially for old ICO models)
  • permanent punishment of honest disagreements or honest mistakes (e.g. punishing 1 honest producer)
  • a higher cost barrier to entry (the bond) for producers, including standby producers
  • more tools for large holders who don't care about forked coins to attack the value of coins (incentives), the value of networks, and thus security on dissenting split forks

Additionally, most bonded designs do weighting by wealth, as they otherwise result in arbitrarily scaled punishment (easily rendered irrelevant through tools like leverage), resulting in a centralizing, uneven production "frequency arc" concentrated in a few nodes.

Dan said he was writing a paper on "Bonded validation is subject to Pareto principles" last year, for reference; is it out, maybe under a different name?

Long-term punishment of losing infinite income through votes provides incentives/punishments not present in Casper, and a far greater opportunity cost than any realistic slashing-condition scaling factor.

An alternative to slashing conditions to consider is a temporary reduction of rank through a short-term (stackable) multiplier (e.g. 0.95) on producers' effective approval totals, which affects sorting - a multiplier that would adjust the sorting of producers automatically, while voters remain the longer-timeframe solution. Repeated violations would stack it; later, if no issue is found by voters with vests at stake, the multipliers would expire and the weight fully recovers. This provides an immediate response system without Pareto bonds or permanent damage to anyone, including honest players, until vested voters have time to educate themselves and intervene. It can also help actively rotate producers under attack instead of waiting a full day. Withholding/withdrawing specific block rewards alone is perhaps the only "slashing" necessary, and no bond is necessary.

@stvenyin

OK
