Warp backend interface and implementation #452
Conversation
plugin/evm/warp_backend.go
Outdated
return fmt.Errorf("failed to add message with key %s to warp database: %w", messageHash.String(), err)
}

return w.Put(messageHash[:], unsignedMessage.Bytes())
Would it make sense to sign the message here and place the signature in the database instead of keeping the whole message? We only need to fetch the signature so no need to store the full message.
The main question to me is the performance tradeoff.
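As a sketch of that suggestion, signing eagerly in AddMessage and persisting only the signature might look like the following (the `signer` func and map-backed `db` are hypothetical stand-ins for the node's BLS signer and the warp database, not the PR's actual types):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// signer stands in for the node's BLS signer; db stands in for the
// warp key-value database.
type signer func(msg []byte) []byte
type db map[[32]byte][]byte

// addMessage signs eagerly and persists only the signature, keyed by the
// message hash, so fetching a signature later is a plain lookup.
func addMessage(d db, sign signer, unsignedMsg []byte) [32]byte {
	messageID := sha256.Sum256(unsignedMsg)
	d[messageID] = sign(unsignedMsg)
	return messageID
}

func main() {
	d := db{}
	sign := func(msg []byte) []byte { return append([]byte("sig:"), msg...) }
	id := addMessage(d, sign, []byte("hello warp"))
	fmt.Println(len(d[id])) // prints: 14
}
```

The tradeoff discussed below is that the persisted signature becomes stale if the BLS key ever changes.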
yup, makes sense; made the change and added a comment.
hmm, would this create a problem if the node shuts down, the BLS key changes, and it boots again? I don't know how this will be used, but we should not assume that BLS keys never change on a node.
hmm, yeah, good question. I'm not sure whether the backend/cache would be wiped during the node's shutdown, so that after it comes back up with a different BLS key it would start from scratch.
The cache would be wiped, but the db won't be. Since we started writing signatures to the db, they will persist between restarts, including the case where the BLS key changes. So any old signature will still be served by the node, even though the node could no longer produce that signature itself after the key change.
Yeah, this is definitely a valid concern. I think there are a few considerations here with respect to storing the signature rather than the message (or message hash):
- An honest node should never return an invalid signature. This could happen if we store signatures and then the BLS key changes. If we store signatures in the DB, we'll need to be careful to invalidate them when we go to use them after changing a BLS key.
- It's not the end of the world if an honest node is unable to return a signature for an older message after its BLS key has changed. We only need 67% of nodes to sign any given message.
- Ideally, we'd avoid re-signing messages unnecessarily.
Considering all of that, I think the cleanest option is to store a hash of each message, where the hash is what will be signed to generate the BLS signature of a message. We can still cache signatures that are generated, since the cache will be cleared on restart if the BLS key changes. And we have the benefit of not needing to store arbitrary-size messages to disk.
I think the simplicity of that approach makes it more desirable than storing the signature with some identifier of which key was used to generate it, and then checking that the key is still the current key when returning a signature from the DB.
Curious to hear others' thoughts though (cc @aaronbuchwald, @ceyonur, @minghinmatthewlam)
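A minimal sketch of that hash-storage proposal, assuming plain map stand-ins for the avalanchego cache and database and a stub for the BLS signer (all names here are illustrative, not the PR's actual identifiers):

```go
package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

// getSignature sketches the proposed flow: serve from the signature cache
// when possible; on a miss, confirm the message hash is in the database
// (i.e. this node accepted the message) and re-sign with the current key.
// sigCache and hashDB stand in for the avalanchego cache and database;
// sign stands in for the node's BLS signer.
func getSignature(
	sigCache map[[32]byte][]byte,
	hashDB map[[32]byte]bool,
	sign func([]byte) []byte,
	messageID [32]byte,
) ([]byte, error) {
	if sig, ok := sigCache[messageID]; ok {
		return sig, nil
	}
	if !hashDB[messageID] {
		return nil, errors.New("unknown message")
	}
	sig := sign(messageID[:]) // the hash is what gets signed
	sigCache[messageID] = sig // safe to cache: the cache is wiped on restart, so a key change can't leak stale signatures
	return sig, nil
}

func main() {
	id := sha256.Sum256([]byte("msg"))
	cache := map[[32]byte][]byte{}
	db := map[[32]byte]bool{id: true}
	sign := func(b []byte) []byte { return append([]byte("sig:"), b...) }
	sig, err := getSignature(cache, db, sign, id)
	fmt.Println(len(sig), err) // prints: 36 <nil>
}
```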
I like the approach: the most recently added messages will already have their signatures saved in the cache, and won't need to hit the db and re-sign the hash to generate a signature. Also, this is simpler than having some identifier to check whether the saved signature in the db matches the current key of the node.
I agree with point 2, but considering different-sized subnets and nodes with a big % of stake in a subnet, it might be good to avoid this case.
It makes a lot of sense to sign hashes rather than the full messages. In that case, I think we would not need to store anything in the DB, just in the cache, right?
We should still store the hashes of messages that the node is willing to sign in the database, I think. Otherwise, if a node restarts (even without changing its BLS key), it will be unwilling to sign any messages that were submitted beforehand, which would result in more messages being dropped entirely.
We discussed this offline and ended up going back to saving the UnsignedMessage in the database. This avoids signing a message hash, the cost will be proportional to the message at save time, and we will implement some form of periodic cleanup, so saving the hash vs. the full message won't make a big difference.
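The approach settled on here could be sketched roughly as follows, with toy in-memory stand-ins (`backend`, `msgDB`, and `sigCache` are illustrative names, not the PR's actual identifiers):

```go
package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

// The decided approach: persist the full unsigned message keyed by its
// hash; sign lazily on request and cache signatures only in memory, so a
// BLS key change after restart can never serve a stale persisted signature.
type backend struct {
	msgDB    map[[32]byte][]byte // persists across restarts and key changes
	sigCache map[[32]byte][]byte // wiped on restart
	sign     func([]byte) []byte // stand-in for the BLS signer
}

func (b *backend) addMessage(unsignedMsg []byte) [32]byte {
	messageID := sha256.Sum256(unsignedMsg)
	b.msgDB[messageID] = unsignedMsg
	return messageID
}

func (b *backend) getSignature(messageID [32]byte) ([]byte, error) {
	if sig, ok := b.sigCache[messageID]; ok {
		return sig, nil
	}
	msg, ok := b.msgDB[messageID]
	if !ok {
		return nil, errors.New("unknown message")
	}
	sig := b.sign(msg) // always signed with the current key
	b.sigCache[messageID] = sig
	return sig, nil
}

func main() {
	b := &backend{
		msgDB:    map[[32]byte][]byte{},
		sigCache: map[[32]byte][]byte{},
		sign:     func(m []byte) []byte { return append([]byte("sig:"), m...) },
	}
	id := b.addMessage([]byte("msg"))
	sig, _ := b.getSignature(id)
	fmt.Println(string(sig)) // prints: sig:msg
}
```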
plugin/evm/warp/backend.go
Outdated
return sig.([]byte), nil
}

signature, err := w.db.Get(messageID[:])
What is the likelihood that a previously evicted entry in the cache will need to be accessed again? If the likelihood is low, we may be better off not reinserting it in the cache. I'm not sure what the expected access pattern will be.
Wanted to note a further cleanup option we could implement here that I discussed with @aaronbuchwald yesterday:
It would be great then if this database only stored message hashes for a period of time and automatically deleted them afterwards, so as to not have this database grow unbounded in size. We could implement this by keeping 2 indices in the database: one storing just the message hashes, and the other mapping a prefix with a timestamp to the message hash. Then every so often we could iterate over the prefix-timestamp keys and delete any message hashes more than 7 days old. I don't think we need to implement this in this PR necessarily, but wanted to get everyone's thoughts.
Agreed this is the right approach in the long term. I don't think this needs to be a hard requirement for this to get merged, or even for v1 of this work, but doing it this way the first time it's in use would save us a database migration process later, so it would be good to include.
plugin/evm/warp/backend_test.go
Outdated
require.Equal(t, expectedSig, signature)
}

func TestWarpBackend_InvalidMessage(t *testing.T) {
Can we add one more test case with a cache of size 0?
LGTM after addressing the last few comments
Also should call out that it will require reference counting the message hashes, since they could occur multiple times and we don't want to delete a hash from a week ago if the same hash was just re-submitted. Definitely agree on waiting for a separate PR to implement this. 👍
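The reference-counting idea might be sketched like this (`refCounts`, `add`, and `release` are hypothetical names, not anything from the codebase):

```go
package main

import "fmt"

// refCounts tracks how many times each hash has been submitted: a hash is
// only eligible for deletion when its last reference is released, so a
// week-old hash that was just re-submitted survives the old batch's cleanup.
type refCounts map[[32]byte]int

func (r refCounts) add(h [32]byte) { r[h]++ }

// release decrements and reports whether the hash can now be deleted.
func (r refCounts) release(h [32]byte) bool {
	r[h]--
	if r[h] <= 0 {
		delete(r, h)
		return true
	}
	return false
}

func main() {
	r := refCounts{}
	var h [32]byte
	r.add(h) // original submission a week ago
	r.add(h) // same hash re-submitted today
	fmt.Println(r.release(h), r.release(h)) // prints: false true
}
```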
If we don't need fine-grained pruning of expired entries, a generational pruning approach may be cheaper. Example: we have separate "new" and "old" tables. New entries are added to the new table. Every 7 days, the new table becomes the old table, and the previous old table is dropped/cleared, since all of its entries should have expired. Lookups will require checking both the new table and the old table. However, I'm not sure if "dropping a table" is any faster than iterating through and deleting expired entries in a prefix database. I suppose it would be faster if the database has an efficient range-delete operation, where the range consists of every entry with the same version number representing either "old" or "new".
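A toy version of the generational approach, assuming plain maps in place of the two tables (all names are illustrative):

```go
package main

import "fmt"

// genStore keeps a "new" and an "old" generation; rotate drops the old
// generation wholesale instead of deleting entries one by one.
type genStore struct {
	newGen, oldGen map[[32]byte][]byte
}

func newGenStore() *genStore {
	return &genStore{newGen: map[[32]byte][]byte{}, oldGen: map[[32]byte][]byte{}}
}

func (g *genStore) put(h [32]byte, msg []byte) { g.newGen[h] = msg }

// get checks both generations, as lookups must.
func (g *genStore) get(h [32]byte) ([]byte, bool) {
	if m, ok := g.newGen[h]; ok {
		return m, true
	}
	m, ok := g.oldGen[h]
	return m, ok
}

// rotate runs every retention period (e.g. 7 days): new becomes old,
// and the previous old generation (all expired) is dropped in O(1).
func (g *genStore) rotate() {
	g.oldGen = g.newGen
	g.newGen = map[[32]byte][]byte{}
}

func main() {
	g := newGenStore()
	var h [32]byte
	h[0] = 1
	g.put(h, []byte("msg"))
	g.rotate()
	_, ok := g.get(h) // still visible from the old generation
	g.rotate()
	_, gone := g.get(h) // dropped with the old generation
	fmt.Println(ok, gone) // prints: true false
}
```

An entry therefore lives between one and two retention periods, which matches the "should have expired" reasoning above.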
plugin/evm/warp/backend.go
Outdated
@@ -46,17 +46,16 @@ func NewWarpBackend(snowCtx *snow.Context, db database.Database, signatureCacheS
func (w *warpBackend) AddMessage(ctx context.Context, unsignedMessage *teleporter.UnsignedMessage) error {
	messageID := hashing.ComputeHash256Array(unsignedMessage.Bytes())

	// We generate the signature here and only save the signature in the db and cache.
	// It is left to smart contracts built on top of Warp to save messages if required.
	// We save the message instead of signature for db, in case for bls key changes.
if BLS keys may change, (when) is it OK to cache the signature?
This is related to this comment. Signatures should be okay to cache while the node is running normally, for adding messages and getting signatures; if the node goes offline and possibly changes its BLS key, the cache is wiped. After the restart, the cache gets repopulated with signatures from the updated BLS key.
yes, the assumption is that the BLS key will only change after a restart.
could we capture this in a comment in code? otherwise, LGTM
LGTM. Thanks for creating the ticket to add periodic clean up of old messages in the future also.
plugin/evm/warp/backend_test.go
Outdated
sk, err := bls.NewSecretKey()
require.NoError(t, err)
snowCtx.TeleporterSigner = teleporter.NewSigner(sk, sourceChainID)
be := NewWarpBackend(snowCtx, db, 500)
can we change `be` to `backend` throughout?
LGTM with one suggestion to change a variable name in the unit tests
Linking warp backend cleanup as a follow up from this PR. Addressing this comment: #452 (comment)
Great work!
Why this should be merged

This PR adds a WarpBackend interface which will be used by the warp precompile to add accepted messages, and also to get signatures for the signature handler. Issue #444

How this works

- WarpBackend interface
- AddMessage for the warp precompile to add messages to the db
- GetSignature for the signature handler to request a message signature
- WarpMessagesDB
How this was tested