Most importantly, you need the Laguna Chain to be running before you can index it. So, spin up a Laguna node: navigate to the root folder of the Laguna Chain repo, then compile and run the node:
# compile the node
cargo build --release

# run the node
./target/release/laguna-node --tmp --dev
Once the chain is running, make note of the port it listens on; by default it uses 9944.
Expose the RPC URL (default: localhost:9944) of the Laguna chain to the public, as the Subsquid archive requires access to it. For this you can download a free tool like ngrok and use it for tunnel forwarding. Once you have ngrok on your machine, run ./ngrok http <CHAIN_PORT_NUMBER> (default: 9944). It returns a URL like http://<RPC_URL>.
Setting up the Archive Locally
Set the RPC endpoint in archive/docker-compose.yml by replacing the existing endpoint (set to Kusama by default in the subsquid-template) with wss://<RPC_URL>. Additionally, if you need to index wasm contracts, enable support for them in docker-compose.yml (under the gateway key). Run the archive with docker compose -f archive/docker-compose.yml up. The archive exposes its GraphQL endpoint on port 8888 by default.
# file: archive/docker-compose.yml
# some config above
  ingest:
    depends_on:
      - db
    restart: on-failure
    image: subsquid/substrate-ingest:firesquid
    command: [
      "-e", "wss://0a11-103-171-117-157.in.ngrok.io",
      # NOTE: ^ this is the public URL of the local Laguna chain RPC generated by ngrok tunnel forwarding
      # ...
    ]
  # some config in between
  gateway:
    depends_on:
      - db
    image: subsquid/substrate-gateway:firesquid
    environment:
      DATABASE_MAX_CONNECTIONS: 5
      RUST_LOG: "actix_web=info,actix_server=info"
    command: [
      "--database-url", "postgres://postgres:postgres@db:5432/squid-archive",
      # "--evm-support" # uncomment for chains with Frontier EVM pallet
      # (e.g. Moonbeam/Moonriver or Astar/Shiden)
      "--contracts-support"
    ]
# some config below
Install Dependencies
Prerequisites: Node.js (v16 or later), Docker
Install core dependencies from the root folder: npm install && npm update
Install additional dependencies (needed for wasm-indexing): npm install @subsquid/ink-abi @subsquid/ink-typegen
@subsquid/ink-abi: required for reading the metadata (application binary interface) of an ink contract and decoding data against it
@subsquid/ink-typegen: generates the boilerplate code and interfaces necessary to decode a contract's event logs, messages (smart contract functions), and constructors
Setting up the Processor Locally
Once the archive is running, we can set up the processor. The first step is to define the schema of the events you want to index in the schema.graphql file.
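For example, a minimal schema mirroring the TransferEvent fields used later in this guide could look as follows (an illustrative sketch; the entity and field names are assumptions, not part of the template):

# file: schema.graphql (sketch)
type Transfer @entity {
  id: ID!
  blockNumber: Int! @index
  timestamp: DateTime! @index
  from: String! @index
  to: String! @index
  amount: BigInt!
}

The TypeORM entity classes used by the processor are then generated from this schema (in the template this is typically done with npx squid-typeorm-codegen).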
Change the archive source URL in processor.ts to point to our local Laguna Chain GraphQL endpoint, which by default is http://localhost:8888/graphql.
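A minimal sketch of that data-source configuration, assuming the firesquid-era SubstrateBatchProcessor used by the template (the endpoint values are the local defaults mentioned above):

// file: src/processor.ts (sketch)
import {SubstrateBatchProcessor} from '@subsquid/substrate-processor'

const processor = new SubstrateBatchProcessor()
  .setDataSource({
    archive: 'http://localhost:8888/graphql', // local Laguna archive gateway
    chain: 'ws://localhost:9944',             // local Laguna node RPC
  })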
Add all the events you want to index to the SubstrateBatchProcessor using the addEvent() method. For example, if you want to fetch the Transfer events emitted by pallet-balances (exposed as Balances by the construct_runtime!() macro), register Balances.Transfer as sketched below.
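A sketch of such a registration (the data selection options follow the firesquid-era batch processor API; request only the fields you actually need):

// sketch: registering an event on the batch processor
processor.addEvent('Balances.Transfer', {
  data: {event: {args: true}}, // fetch the event arguments
} as const)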
To support indexing of wasm-based ink contracts, you first need to run typegen for the contract ABI; the generated types are needed to decode the events/messages of a contract fetched by the SubstrateBatchProcessor. For example, if you want to index ERC20 ink contracts, save the ERC20 ABI file generated by the compiler at the path src/abi/erc20.json and run: npx squid-ink-typegen --abi src/abi/erc20.json --output src/abi/erc20.ts
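The generated module can then be used to decode contract events inside the batch handler. A sketch, assuming the generated erc20.ts exposes a decodeEvent helper (as squid-ink-typegen output typically does); CONTRACT_ADDRESS and the decoded field access are illustrative assumptions:

// sketch: decoding an ERC20 event from a Contracts.ContractEmitted item
import * as erc20 from './abi/erc20'

// hypothetical: address of the deployed ERC20 contract to index
const CONTRACT_ADDRESS = '0x...' // placeholder

function decodeErc20Event(item: any): void {
  if (item.name === 'Contracts.ContractEmitted' && item.event.args.contract === CONTRACT_ADDRESS) {
    const event = erc20.decodeEvent(item.event.args.data)
    if (event.__kind === 'Transfer') {
      // handle the decoded Transfer (field names depend on the contract's ABI)
    }
  }
}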
All the processor logic concerned with building up the event database goes in processor.run(new TypeormDatabase(), async ctx => { /* LOGIC GOES HERE */ }). It is the main entry point whenever a new bundle of data arrives from the archive.
As the number of events we want to index can keep growing, the code stays easier to read if the filtering logic for each event is separated out into its own function. For example:
// processor.ts
processor.run(new TypeormDatabase(), async ctx => {
  // this is the main entry point whenever a new bundle of data arrives from the archive
  let transfersData = getTransfers(ctx)
  /* OTHER LOGIC TO UPDATE THE DATABASE */
})

interface TransferEvent {
  id: string
  blockNumber: number
  timestamp: Date
  from: string
  to: string
  amount: bigint
}

function getTransfers(ctx: Ctx): TransferEvent[] {
  let transfers: TransferEvent[] = []
  for (let block of ctx.blocks) {
    for (let item of block.items) {
      if (item.name == "Currencies.Transfer") {
        let e = new CurrenciesTransferEvent(ctx, item.event)
        let rec: {from: Uint8Array, to: Uint8Array, amount: bigint}
        let [from, to, amount] = e.transferEventData
        rec = {from, to, amount}
        transfers.push({
          id: item.event.id,
          blockNumber: block.header.height,
          timestamp: new Date(block.header.timestamp),
          from: ss58.codec('kusama').encode(rec.from),
          to: ss58.codec('kusama').encode(rec.to),
          amount: rec.amount,
        })
      }
    }
  }
  return transfers
}
All the logic related to decoding the events/messages fetched by the SubstrateBatchProcessor can be housed in a separate file to make the project more modular. For example, the logic associated with decoding Currencies.Transfer is placed in src/types/events.ts:
// events.ts
export type AccountId = Uint8Array
export type Balance = bigint

export class CurrenciesTransferEvent {
  private readonly _chain: Chain
  private readonly event: Event

  constructor(ctx: EventContext)
  constructor(ctx: ChainContext, event: Event)
  constructor(ctx: EventContext, event?: Event) {
    event = event || ctx.event
    assert(event.name === 'Currencies.Transfer')
    this._chain = ctx._chain
    this.event = event
  }

  /**
   * Transfer succeeded (from, to, value).
   */
  get transferEventData(): [AccountId, AccountId, Balance] {
    return this._chain.decodeEvent(this.event)
  }
}
Spin up a PostgreSQL database docker container, which the Squid APIs need to store the processed data in. Run:
make up
# or alternatively: docker-compose up -d
Once all the processor logic is implemented, we can build the code and create the database tables:
# build the code
npm run build

# remove existing migrations
rm -rf db/migrations/*js

# create and apply a new migration
npx squid-typeorm-migration generate
make migrate
Launch the Project
NOTE: the Laguna chain, the ngrok tunnel forwarding (exposing the RPC to the public), and the archive must already be running.
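With those in place, the processor and the GraphQL API server can be started. A sketch of a typical launch sequence, assuming the Makefile targets shipped with the subsquid template (check your Makefile for the exact commands):

# start the processor (assumed template Makefile target)
make process

# in a separate terminal, start the GraphQL API server
make serve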