-
**Benchmarking Contract operations at the SwingSet VM Level**

One somewhat mature set of tools for benchmarking smart contract operations runs within a SwingSet VM, but not using the cosmos-sdk consensus layer: There's a somewhat extended write-up:
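As a rough illustration of what "benchmarking at the SwingSet VM level" can look like, here is a sketch of a computron-counting run policy handed to `controller.run()`. The tooling referenced above may work differently; the `makeComputronCounter` name and usage here are purely illustrative.

```js
// Sketch only: a run policy that tallies computrons per crank, in the style of
// SwingSet's run-policy interface. makeComputronCounter is an illustrative name.
const makeComputronCounter = () => {
  let total = 0n;
  let cranks = 0;
  return {
    policy: {
      vatCreated: () => true,
      crankComplete: ({ computrons = 0n } = {}) => {
        total += computrons;
        cranks += 1;
        return true; // keep running until the run queue drains
      },
      crankFailed: () => true,
      emptyCrank: () => true,
    },
    report: () => ({ total, cranks }),
  };
};

// Hypothetical usage inside a swingset-only test (no cosmos-sdk consensus):
//   const { policy, report } = makeComputronCounter();
//   await controller.run(policy);
//   t.log('computrons for this operation:', report());
```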
-
**end-to-end testing with transactions and queries**

The ultimate test is to run an actual blockchain, submit transactions, and run queries. For example, we have a shell script that tests minting a KREAd character:

```sh
# build the offer, submit it, then verify the character shows up in vstorage
KREAD_ITEM_OFFER=$(mktemp -t kreadItem.XXX)
node ./generate-kread-item-request.mjs > $KREAD_ITEM_OFFER
agops perf satisfaction --from $GOV1ADDR --executeOffer $KREAD_ITEM_OFFER --keyring-backend=test
agd query vstorage data published.wallet.$GOV1ADDR.current -o json >& gov1.out
name=`jq '.value | fromjson | .values[2] | fromjson | .body[1:] | fromjson | .purses[1].balance.value.payload[0][0].name ' gov1.out`
test_val $name \"ephemeral_Ace\" "found KREAd character"
```
Before KREAd went into production, we generated various levels of load using scripts like that and looked at various metrics. I think we ran them not just in an a3p context, but on an actual multi-validator-node test network in a Kubernetes cluster, and we looked at the output using tools such as Datadog. @toliaqat is there any more detail to share in this context?
-
**end-to-end testing APIs**

There are a few other tools for submitting transactions and making queries in the @agoric/synthetic-chain package. See also: ... including recent code such as:

```js
const wdUser1 = await provisionSmartWallet(agoricAddr, {
  BLD: 100_000n,
  IST: 100_000n,
});
t.log(`provisioning agoric smart wallet for ${agoricAddr}`);
const doOffer = makeDoOffer(wdUser1);
const brands = await vstorageClient.queryData(
  'published.agoricNames.brand',
);
```
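For context, the `doOffer` helper returned by `makeDoOffer` is typically called with a wallet offer spec. The sketch below shows one plausible continuation; the instance path, invitation maker, brand lookup, and proposal details are all made up for illustration and would need to match the contract under test.

```js
// Sketch only: assumes queryData returns [name, brand] entries for agoricNames.brand,
// and that doOffer accepts a smart-wallet OfferSpec. All specifics are hypothetical.
const { IST } = Object.fromEntries(brands);

await doOffer({
  id: `bench-offer-${Date.now()}`,
  invitationSpec: {
    source: 'agoricContract',
    instancePath: ['someInstance'], // hypothetical instance name
    callPipe: [['makeSomeInvitation']], // hypothetical invitation maker
  },
  proposal: {
    give: { Fee: { brand: IST, value: 1_000_000n } },
  },
});
```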
-
Attn @gibson042
-
Here's a goal from internal discussion. I don't expect that this is self-explanatory, but I'll go ahead and share it...
> we should learn how many computrons are spent to do one of those offers, to guess what our 65Mc limit will do to the scheduling. Once we deploy to a multi-node net, we should see how it compares against the actual wallclock time needed by those validators.
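To make that goal concrete, here is a back-of-the-envelope sketch. Only the 65M computron per-block budget comes from the quote above; the per-offer cost and block time are placeholder assumptions to be replaced by actual measurements.

```js
// Back-of-the-envelope sketch: per-offer computron cost and block time are placeholders.
const BLOCK_COMPUTRON_BUDGET = 65_000_000n; // the "65Mc limit" from the quote above
const measuredComputronsPerOffer = 8_000_000n; // placeholder: measure this!
const assumedBlockTimeSeconds = 6; // placeholder: verify on the target network

const offersPerBlock = BLOCK_COMPUTRON_BUDGET / measuredComputronsPerOffer;

console.log(`~${offersPerBlock} offers per block`);
console.log(
  `~${(Number(offersPerBlock) / assumedBlockTimeSeconds).toFixed(2)} offers per second`,
);
```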
-
It's required by the checklist:
Any suggestions on how to do it? Are there any available tools?
cc @tgrecojs