# CORE ETL

Extract, transform, and load data from the Core Blockchain.

`core-etl` is a tool designed to extract, transform, and load data from the Core Blockchain. It supports various modules and can be configured to work with different storage backends.
## Installation

To install `core-etl`, you need to have Rust and Cargo installed. You can install Rust using `rustup`:

```sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
Clone the repository:

```sh
git clone https://github.com/core-coin/core-etl.git
```
Build the binary and use it:

```sh
cargo build --release
cd target/release
./core-etl {flags}
```
Alternatively, use it inside a Docker container:

```sh
make init
make sync-local
```
## Commands

- `export`: Export blockchain data to storage.
- `help`: Print help information.
## Flags

| Flag | Description | Environment Variable | Default Value |
|---|---|---|---|
| `-r, --rpc-url <RPC_URL>` | URL of the RPC node that provides the blockchain data. | `RPC_URL` | `wss://xcbws.coreblockchain.net` |
| `-n, --network <NETWORK>` | Network to sync data from (e.g., mainnet, devin, private). | `NETWORK` | `mainnet` |
| `--storage <STORAGE>` | Storage type for saving blockchain data (e.g., sqlite3-storage, xata-storage). | `STORAGE` | `sqlite3-storage` |
| `-s, --sqlite3-path <SQLITE3_PATH>` | Path to the SQLite3 file where the blockchain data is saved. | `SQLITE3_PATH` | None |
| `-x, --xata-db-dsn <XATA_DB_DSN>` | Xata database DSN where the blockchain data is saved. | `XATA_DB_DSN` | None |
| `-t, --tables-prefix <TABLES_PREFIX>` | Prefix for the tables in the database. Useful when running multiple instances. | `TABLES_PREFIX` | `etl` |
| `-m, --modules <MODULES>...` | Specify which data to store (e.g., blocks, transactions, token_transfers). | `MODULES` | `blocks,transactions,token_transfers` |
| `--threads <THREADS>` | Number of working threads during the initial sync. | `THREADS` | `3` |
| `-h, --help` | Print help information. | None | None |
| `-V, --version` | Print version information. | None | None |
## Export flags

| Flag | Description | Environment Variable | Default Value |
|---|---|---|---|
| `-b, --block <BLOCK>` | Block to start syncing from. | `BLOCK` | None |
| `-w, --watch-tokens <WATCH_TOKENS>...` | Watch token transfers (e.g., `cbc20:token_address`). | `WATCH_TOKENS` | None |
| `-a, --address-filter <ADDRESS_FILTER>...` | Filter transactions by address (e.g., "0x123,0x456"). | `ADDRESS_FILTER` | None |
| `-r, --retention-duration <RETENTION_DURATION>` | Duration (in seconds) to retain data in the database. | `RETENTION_DURATION` | `0` |
| `-c, --cleanup-interval <CLEANUP_INTERVAL>` | Interval (in seconds) for the cleanup task, which removes data older than the retention duration. | `CLEANUP_INTERVAL` | `3600` |
| `-l, --lazy` | Lazy mode: do not sync while the node is syncing. Useful for slow-syncing nodes. | `LAZY` | None |
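The retention and cleanup flags take their values in seconds. As a quick sanity check (a minimal sketch, not part of `core-etl` itself), a 24-hour retention window with an hourly cleanup pass works out to:

```shell
# Convert a 24-hour retention window and a 1-hour cleanup interval to seconds,
# the units expected by -r/--retention-duration and -c/--cleanup-interval.
retention=$((24 * 60 * 60))
cleanup=$((60 * 60))
echo "$retention $cleanup"   # 86400 3600
```

These are the same values (`-r 86400 -c 3600`) used in the usage examples further down.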
## Makefile commands

The Makefile provides convenient commands for building, running, and managing the project.

| Command | Description |
|---|---|
| `make build` | Build the project. |
| `make clean` | Clean up logs and database files. |
| `make init` | Initialize the database mount. |
| `make up` | Start services using Docker Compose. |
| `make down` | Stop and remove services using Docker Compose. |
| `make stop` | Stop running containers without removing them. |
| `make start` | Start existing containers that were stopped. |
| `make sync-local` | Sync the SQLite database with a local node. |
| `make sync-remote` | Sync the SQLite database with a remote node. |
## Docker

You can build and run the project using Docker:

```sh
docker build -t core-etl .
docker run -d --name core-etl -e RPC_URL=https://your.rpc.url -e SQLITE3_PATH=/path/to/your/sqlite3.db core-etl export
```
## Docker Compose

Docker Compose can be used to manage multi-container Docker applications.

Sync with a local node:

```sh
make init
make sync-local
```

Sync with a remote node:

```sh
make init
make sync-remote
```
## Configuration

You can configure `core-etl` using environment variables or command-line flags. Here are some examples.

Using environment variables:

```sh
export NETWORK="mainnet"
export STORAGE="sqlite3-storage"
export SQLITE3_PATH="/path/to/your/sqlite3.db"
export TABLES_PREFIX="etl"
export MODULES="blocks,transactions,token_transfers"
```

The equivalent invocation using command-line flags:

```sh
./core-etl -n mainnet --storage sqlite3-storage -s /path/to/your/sqlite3.db -t etl -m blocks,transactions,token_transfers export
```
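When mixing the two styles in a wrapper script, you can fall back to the documented defaults whenever a variable is unset. A minimal sketch (the fallback logic here is illustrative, not taken from `core-etl` itself):

```shell
# Clear the variables first so the documented defaults are what we observe.
unset NETWORK STORAGE TABLES_PREFIX

# ${VAR:-default} keeps an existing value and substitutes the default otherwise.
NETWORK="${NETWORK:-mainnet}"
STORAGE="${STORAGE:-sqlite3-storage}"
TABLES_PREFIX="${TABLES_PREFIX:-etl}"

echo "$NETWORK $STORAGE $TABLES_PREFIX"   # mainnet sqlite3-storage etl
```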
## Usage examples

Export blockchain data to SQLite3 storage:

```sh
./core-etl -s ./sqlite3.db export
```
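Once an export is running, the resulting database can be inspected with the `sqlite3` CLI. The table name below is an assumption based on the default `etl` table prefix (the real schema is defined by `core-etl` and may differ), so the snippet creates a stand-in table to stay self-contained:

```shell
# Stand-in schema: assumes tables are named <prefix>_<module>, e.g. etl_blocks.
# Illustration only; list the tables core-etl actually created with `.tables`.
sqlite3 demo.db "CREATE TABLE IF NOT EXISTS etl_blocks (number INTEGER PRIMARY KEY, hash TEXT);"
sqlite3 demo.db "INSERT INTO etl_blocks (number, hash) VALUES (1, '0xabc');"
sqlite3 demo.db "SELECT COUNT(*) FROM etl_blocks;"
```

Against a real export, `sqlite3 ./sqlite3.db .tables` shows the tables that were actually created.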
Export only transaction data for the Devin network to SQLite3 storage, using 10 parallel threads for faster syncing:

```sh
./core-etl -n devin -s ./sqlite3.db -m transactions --threads 10 export
```
Export transactions and CTN transfers to Postgres with a cleanup interval of 1 hour and a retention period of 24 hours:

```sh
./core-etl --storage xata-storage -x postgres://user:password@localhost:5432/dbname -m transactions,token_transfers export -w "ctn" -r 86400 -c 3600
```
Export blocks and transactions using a local node, with the `filtered_etl` table prefix. Do not sync data until the node is synced, and filter transactions by address `cb22as..21`:

```sh
./core-etl -s ./sqlite3.db -r https://127.0.0.1:8545 -t filtered_etl export -m blocks,transactions -l -a cb22as..21
```
## Contributing

Contributions are welcome! Please open an issue or submit a pull request on GitHub.
## License

This project is licensed under the CORE License. See the LICENSE file for details.