Merge branch 'next' into feat/config-generator
droserasprout committed Feb 17, 2025
2 parents d51f7e6 + b62b424 commit 8959a40
Showing 17 changed files with 1,605 additions and 43 deletions.
1 change: 0 additions & 1 deletion .github/workflows/test.yml
@@ -37,7 +37,6 @@ jobs:
        uses: actions/setup-python@v4
        with:
          python-version: '3.12'
-          cache: 'pip'

      - name: Run install
        run: make install
11 changes: 4 additions & 7 deletions docs/0.quickstart-substrate.md
@@ -2,6 +2,7 @@
title: "Quickstart"
description: "This page will guide you through the steps to get your first selective indexer up and running in a few minutes without getting too deep into the details."
navigation.icon: "stars"
+network: "substrate"
---

# Quickstart
@@ -64,12 +65,9 @@ DipDup will create a Python package `demo_substrate_events` with everything you

```shell [Terminal]
$ dipdup package tree
-demo_substrate_events [/home/droserasprout/git/dipdup/src/demo_substrate_events]
+demo_substrate_events [.]
├── abi
-│ ├── assethub/v1000000.json
-│ ├── assethub/v1001002.json
-│ ├── ...
-│ └── assethub/v9430.json
+│ └── assethub/v601.json
├── configs
│ ├── dipdup.compose.yaml
│ ├── dipdup.sqlite.yaml
@@ -98,8 +96,7 @@ demo_substrate_events [/home/droserasprout/git/dipdup/src/demo_substrate_events]
├── sql
├── types
│ ├── assethub/substrate_events/assets_transferred/__init__.py
-│ ├── assethub/substrate_events/assets_transferred/v601.py
-│ └── assethub/substrate_events/assets_transferred/v700.py
+│ └── assethub/substrate_events/assets_transferred/v601.py
└── py.typed
```

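As an aside (not part of this diff): the regenerated `types` modules above are what event handlers import. A minimal sketch of such a handler, assuming DipDup 8's Substrate event API — every name except the paths shown in the tree is illustrative:

```python
# Hedged sketch of a handler consuming the generated v601 payload type.
# Module paths follow the package tree above; the payload class name and
# handler name are assumptions to be checked against the generated code.
from demo_substrate_events.types.assethub.substrate_events.assets_transferred import (
    AssetsTransferredPayload,
)
from dipdup.context import HandlerContext
from dipdup.models.substrate import SubstrateEvent


async def on_assets_transferred(
    ctx: HandlerContext,
    event: SubstrateEvent[AssetsTransferredPayload],
) -> None:
    # `event.payload` carries version-specific typed data decoded from the ABI
    ctx.logger.info('Assets transferred: %s', event.payload)
```
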
2 changes: 1 addition & 1 deletion docs/10.supported-networks/2.astar.md
@@ -31,7 +31,7 @@ Explorer: [Blockscout](https://astar-zkevm.explorer.startale.com/)

### Astar zKyoto

-Explorer: [Blockscout](https://zkyoto.explorer.startale.com/)
+Explorer: [Blockscout](https://zkyoto.explorer.startale.com/) (🔴 404)

| datasource | status | URLs |
| -----------------:|:------------ | ----------------------------------------------------- |
2 changes: 1 addition & 1 deletion docs/10.supported-networks/23.hokum.md
@@ -9,7 +9,7 @@ description: "Hokum network support"

{{ #include 10.supported-networks/_intro.md }}

-Explorer: [Blockscout](https://explorer.hokum.gg/)
+Explorer: [Blockscout](https://explorer.hokum.gg/) (🔴 408)

| datasource | status | URLs |
| -----------------:|:------------- | --------------------------- |
2 changes: 1 addition & 1 deletion docs/10.supported-networks/24.kakarot.md
@@ -9,7 +9,7 @@ description: "Kakarot network support"

{{ #include 10.supported-networks/_intro.md }}

-See step-by-step instructions on how to get started in [this guide](https://docs.kakarot.org/ecosystem/data-indexers/dipdup)
+See step-by-step instructions on how to get started in [this guide](https://docs.kakarot.org/starknet/ecosystem/data-indexers/dipdup)

## Kakarot Sepolia

14 changes: 4 additions & 10 deletions docs/10.supported-networks/37.polygon.md
@@ -19,18 +19,10 @@ Explorer: [Polygonscan](https://polygonscan.com)
| **evm.etherscan** | 🟢 works | `https://api.polygonscan.com/api` |
| **evm.node** | 🟢 works | `https://polygon-mainnet.g.alchemy.com/v2` <br> `wss://polygon-mainnet.g.alchemy.com/v2` |

-### Polygon Mumbai
-
-Explorer: [Polygonscan](https://mumbai.polygonscan.com/)
-
-| datasource | status | URLs |
-| -----------------:|:------------- | ------------------------------------------------------- |
-| **evm.subsquid** | 🤔 not tested | `https://v2.archive.subsquid.io/network/polygon-mumbai` |
-| **evm.etherscan** | 🤔 not tested | `https://api-testnet.polygonscan.com/api` |
-| **evm.node** | 🤔 not tested | |
-
### Polygon Amoy Testnet

+Explorer: [Polygonscan](https://amoy.polygonscan.com)
+
| datasource | status | URLs |
| -----------------:|:------------- | ------------------------------------------------------------- |
| **evm.subsquid** | 🤔 not tested | `https://v2.archive.subsquid.io/network/polygon-amoy-testnet` |
@@ -59,6 +51,8 @@ Explorer: [Polygonscan](https://testnet-zkevm.polygonscan.com/)

### Polygon zkEVM Cardona Testnet

+Explorer: [Polygonscan](https://cardona-zkevm.polygonscan.com/)
+
| datasource | status | URLs |
| -----------------:|:------------- | ---------------------------------------------------------------------- |
| **evm.subsquid** | 🤔 not tested | `https://v2.archive.subsquid.io/network/polygon-zkevm-cardona-testnet` |
2 changes: 1 addition & 1 deletion docs/10.supported-networks/43.scale.md
@@ -11,7 +11,7 @@ description: "Scale network support"

### Skale Nebula

-Explorers: [Blockscout](https://green-giddy-denebola.explorer.mainnet.skalenodes.com/), [Skalescan](https://skalescan.com/)
+Explorers: [Blockscout](https://green-giddy-denebola.explorer.mainnet.skalenodes.com/), [Skalescan](https://skalescan.com/) (🔴 408)

| datasource | status | URLs |
| -----------------:|:-------- | ------------------------------------------------------------------------------------------------------------------------ |
12 changes: 6 additions & 6 deletions docs/4.graphql/1.overview.md
@@ -5,7 +5,7 @@ description: "DipDup provides seamless integration with Hasura GraphQL Engine to

# Overview

-DipDup provides seamless integration with [Hasura GraphQL Engine](https://hasura.io/docs/latest/graphql/core/index.html) to expose your data to the client side. It's a powerful tool that allows you to build a GraphQL API on top of your database with minimal effort. It also provides a subscription mechanism to get live updates from the backend. If you don't plan to use GraphQL, you can skip this section.
+DipDup provides seamless integration with [Hasura GraphQL Engine](https://hasura.io/docs/2.0/index/) to expose your data to the client side. It's a powerful tool that allows you to build a GraphQL API on top of your database with minimal effort. It also provides a subscription mechanism to get live updates from the backend. If you don't plan to use GraphQL, you can skip this section.

Before starting to do client integration, it's good to know the specifics of Hasura GraphQL protocol implementation and the general state of the GQL ecosystem.

@@ -18,17 +18,17 @@ By default, Hasura generates three types of queries for each table in your schem
- Aggregation query (can be disabled in config)

All the GQL features such as fragments, variables, aliases, and directives are supported, as well as batching.
-Read more in [Hasura docs](https://hasura.io/docs/latest/graphql/core/databases/postgres/queries/index.html).
+Read more in [Hasura docs](https://hasura.io/docs/2.0/queries/postgres/index/).

It's important to understand that a GraphQL query is just a [POST request](https://graphql.org/graphql-js/graphql-clients/) with JSON payload, and in some instances, you don't need a complicated library to talk to your backend.

### Pagination

-By default, Hasura does not restrict the number of rows returned per request, which could lead to abuses and a heavy load on your server. You can set up limits in the configuration file. See [hasura page](../4.graphql/2.hasura.md?limit-number-of-rows). But then, you will face the need to [paginate](https://hasura.io/docs/latest/graphql/core/databases/postgres/queries/pagination.html) over the items if the response does not fit the limits.
+By default, Hasura does not restrict the number of rows returned per request, which could lead to abuses and a heavy load on your server. You can set up limits in the configuration file. See [hasura page](../4.graphql/2.hasura.md?limit-number-of-rows). But then, you will face the need to [paginate](https://hasura.io/docs/2.0/queries/postgres/pagination/) over the items if the response does not fit the limits.

## Subscriptions

-From [Hasura documentation](https://hasura.io/docs/latest/graphql/core/databases/postgres/subscriptions/index.html):
+From [Hasura documentation](https://hasura.io/docs/2.0/subscriptions/postgres/index/):

Hasura GraphQL engine subscriptions are **live queries**, i.e., a subscription will return the latest result of the query and not necessarily all the individual events leading up to it.

@@ -52,6 +52,6 @@ Please note, that [subscriptions-transport-ws](https://github.com/apollographql/

The purpose of DipDup is to create indexers, which means you can consistently reproduce the state as long as data sources are accessible. It makes your backend "stateless", meaning tolerant to data loss.

-However, you might need to introduce a non-recoverable state and mix indexed and user-generated content in some cases. DipDup allows marking these UGC tables "immune", protecting them from being wiped. In addition to that, you will need to set up [Hasura Auth](https://hasura.io/docs/latest/graphql/core/auth/index.html) and adjust write permissions for the tables (by default, they are read-only).
+However, you might need to introduce a non-recoverable state and mix indexed and user-generated content in some cases. DipDup allows marking these UGC tables "immune", protecting them from being wiped. In addition to that, you will need to set up [Hasura Auth](https://hasura.io/docs/2.0/auth/overview/) and adjust write permissions for the tables (by default, they are read-only).

-Lastly, you will need to execute GQL mutations to modify the state from the client side. [Read more](https://hasura.io/docs/latest/graphql/core/databases/postgres/mutations/index.html) about how to do that with Hasura.
+Lastly, you will need to execute GQL mutations to modify the state from the client side. [Read more](https://hasura.io/docs/2.0/mutations/postgres/index/) about how to do that with Hasura.
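To ground two claims in the page above — that a GraphQL query is just a POST request with a JSON payload, and that pagination is plain `limit`/`offset` arguments — here is a dependency-free sketch. The endpoint URL and the `token_transfer` table are assumptions; substitute your own deployment and schema:

```python
# Minimal sketch: querying a DipDup-backed Hasura endpoint over plain HTTP.
import json
import urllib.request

HASURA_URL = 'http://127.0.0.1:8080/v1/graphql'  # assumed local Hasura instance

QUERY = '''
query Transfers($limit: Int!, $offset: Int!) {
  token_transfer(limit: $limit, offset: $offset, order_by: {level: desc}) {
    id
    amount
    level
  }
}
'''


def fetch_page(limit: int = 100, offset: int = 0) -> list[dict]:
    # A GraphQL request is a JSON body with `query` and optional `variables`
    payload = json.dumps({'query': QUERY, 'variables': {'limit': limit, 'offset': offset}}).encode()
    request = urllib.request.Request(
        HASURA_URL,
        data=payload,
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    if 'errors' in body:
        raise RuntimeError(body['errors'])
    return body['data']['token_transfer']


# Paginate until the server returns fewer rows than requested
offset = 0
while rows := fetch_page(limit=100, offset=offset):
    for row in rows:
        ...  # process a row
    offset += len(rows)
```
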
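Similarly, a live query over the `graphql-ws` protocol recommended above can be driven with a few raw WebSocket messages. A sketch assuming the third-party `websockets` package and the same hypothetical table:

```python
# Hedged sketch of a Hasura live query via the graphql-transport-ws protocol.
import asyncio
import json

import websockets  # pip install websockets


async def watch(url: str = 'ws://127.0.0.1:8080/v1/graphql') -> None:
    async with websockets.connect(url, subprotocols=['graphql-transport-ws']) as ws:
        # Handshake: connection_init -> connection_ack
        await ws.send(json.dumps({'type': 'connection_init', 'payload': {}}))
        await ws.recv()
        # Start the live query; Hasura pushes a `next` message on every change
        await ws.send(json.dumps({
            'id': '1',
            'type': 'subscribe',
            'payload': {'query': 'subscription { token_transfer(limit: 1, order_by: {level: desc}) { level } }'},
        }))
        async for message in ws:
            event = json.loads(message)
            if event.get('type') == 'next':
                print(event['payload']['data'])


asyncio.run(watch())
```
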
6 changes: 3 additions & 3 deletions docs/4.graphql/2.hasura.md
@@ -5,7 +5,7 @@ description: "DipDup can connect to any Hasura GraphQL Engine instance and confi

# Hasura GraphQL

-DipDup provides seamless integration with [Hasura GraphQL Engine](https://hasura.io/docs/latest/graphql/core/index.html) to expose your data to the client side. It's a powerful tool that allows you to build a GraphQL API on top of your database with minimal effort. It also provides a subscription mechanism to get live updates from the backend. If you don't plan to use GraphQL, you can skip this section.
+DipDup provides seamless integration with [Hasura GraphQL Engine](https://hasura.io/docs/2.0/index/) to expose your data to the client side. It's a powerful tool that allows you to build a GraphQL API on top of your database with minimal effort. It also provides a subscription mechanism to get live updates from the backend. If you don't plan to use GraphQL, you can skip this section.

DipDup can connect to any Hasura instance, cloud or self-hosted, and configure it to expose your data via GraphQL API. All you need is to enable this integration in the config file:

@@ -15,7 +15,7 @@ hasura:
admin_secret: ${HASURA_SECRET:-changeme}
```
-DipDup will generate Hasura metadata based on your DB schema and apply it using [Metadata API](https://hasura.io/docs/latest/graphql/core/api-reference/metadata-api/index.html).
+DipDup will generate Hasura metadata based on your DB schema and apply it using [Metadata API](https://hasura.io/docs/2.0/api-reference/metadata-api/index/).
Hasura metadata is all about data representation in GraphQL API. The structure of the database itself is managed solely by DipDup ORM.
@@ -78,6 +78,6 @@ Remember that "camelcasing" is a separate stage performed after all tables are r

There are some cases where you want to apply custom modifications to the Hasura metadata. For example, assume that your database schema has a view that contains data from the main table, in which case you cannot set a foreign key between them. Then you can place files with a .json extension in the `hasura` directory of your project with the content in Hasura query format, and DipDup will execute them in alphabetical order of file names when the indexing is complete.

-The format of the queries can be found in the [Metadata API](https://hasura.io/docs/latest/api-reference/metadata-api/index/) documentation.
+The format of the queries can be found in the [Metadata API](https://hasura.io/docs/2.0/api-reference/metadata-api/index/) documentation.

Feature flag `allow_inconsistent_metadata` set in Hasura configuration section allows users to modify the behavior of the requests error handling. By default, this value is `False`.
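For illustration (not part of this diff): a custom metadata query like the `hasura/*.json` files described above could be applied by hand to the Metadata API roughly as follows. All table, relationship, and column names here are placeholders:

```python
# Hedged sketch: sending one Hasura v2 metadata query, the same kind of
# request DipDup issues when applying generated metadata after indexing.
import json
import urllib.request

HASURA_METADATA_URL = 'http://127.0.0.1:8080/v1/metadata'  # assumed local instance
ADMIN_SECRET = 'changeme'

# Equivalent in shape to a `hasura/*.json` file: link a view to its base
# table manually, since views cannot carry foreign keys.
metadata_query = {
    'type': 'pg_create_object_relationship',
    'args': {
        'source': 'default',
        'table': {'schema': 'public', 'name': 'token_metadata_view'},
        'name': 'token',
        'using': {'manual_configuration': {
            'remote_table': {'schema': 'public', 'name': 'token'},
            'column_mapping': {'token_id': 'id'},
        }},
    },
}

request = urllib.request.Request(
    HASURA_METADATA_URL,
    data=json.dumps(metadata_query).encode(),
    headers={
        'Content-Type': 'application/json',
        'X-Hasura-Admin-Secret': ADMIN_SECRET,
    },
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))
```
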
6 changes: 0 additions & 6 deletions docs/8.examples/2.in-production.md
@@ -49,12 +49,6 @@ StakeNow.fi gives you a 360° view of your investments and lets you manage your

Mavryk is a DAO-operated financial ecosystem that lets users borrow and earn on their terms while participating in the governance of the platform.

-## Vortex
-
-[Homepage](https://app.vortex.network/) (🔴 404)
-
-Vortez is an all-in-one decentralized finance protocol on Tezos blockchain built by Smartlink. Vortex uses DipDup indexer to track AMM swaps, pools, positions, as well as yield farms, and NFT collections.
-
## Versum

[Homepage](https://versum.xyz/)
2 changes: 1 addition & 1 deletion docs/9.release-notes/2.v8.1.md
@@ -7,7 +7,7 @@ description: DipDup 8.1 release notes

# Release Notes: 8.1

-This release was created during the [ODHack 8.0](https://app.onlydust.com/hackathons/odhack-80) event by the following participants:
+This release was created during the [ODHack 8.0](https://app.onlydust.com/osw/odhack-80/overview) event by the following participants:

@bigherc18 contributed support for database migrations using the Aerich tool. This optional integration allows to manage database migrations with the `dipdup schema` commands. See the [Migrations](../1.getting-started/5.database.md#migrations) section to learn to enable and use this integration.

2 changes: 1 addition & 1 deletion scripts/docs.py
@@ -465,7 +465,7 @@ def check_links(source: Path, http: bool) -> None:
        green_echo(f'{i+1}/{len(http_links)}: checking link `{link}`')
        try:
            res = subprocess.run(
-                ('curl', '-s', '-L', '-o', '/dev/null', '-w', '%{http_code}', link),
+                ('curl', '-s', '-L', '-o', '/dev/null', '-w', '%{http_code}', '--max-time', '10', link),
                check=True,
                capture_output=True,
            )
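A rough Python equivalent of the patched behavior — follow redirects, report the final status code, and give up after 10 seconds — without shelling out to curl. A sketch, not the script's actual implementation:

```python
import urllib.error
import urllib.request


def check_link(link: str, timeout: float = 10.0) -> int:
    """Return the final HTTP status code, or 0 on timeout/connection failure."""
    try:
        with urllib.request.urlopen(link, timeout=timeout) as response:
            return response.status  # redirects already followed, like `curl -L`
    except urllib.error.HTTPError as e:
        return e.code  # 4xx/5xx responses still carry a status code
    except (urllib.error.URLError, TimeoutError):
        return 0  # unreachable or timed out, like `curl --max-time`
```
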
