diff --git a/pages/docs/ecosystem-roles/relayer/anchor-update.mdx b/pages/docs/ecosystem-roles/relayer/anchor-update.mdx index 569d8a59..35bfc155 100644 --- a/pages/docs/ecosystem-roles/relayer/anchor-update.mdx +++ b/pages/docs/ecosystem-roles/relayer/anchor-update.mdx @@ -54,7 +54,7 @@ An example configuration file for the Goerli network that is configured for gove You will need to update the linked-anchors and contract addresses for the applicable chains. -``` +```sh filename="config file" copy [[evm.goerli.contracts]] contract = "VAnchor" address = "0x03b88eD9Ff9bE84e4baD3F55D67AE5ABA610523C" diff --git a/pages/docs/ecosystem-roles/relayer/data-querying.mdx b/pages/docs/ecosystem-roles/relayer/data-querying.mdx index 10527152..ecdd2392 100644 --- a/pages/docs/ecosystem-roles/relayer/data-querying.mdx +++ b/pages/docs/ecosystem-roles/relayer/data-querying.mdx @@ -33,7 +33,7 @@ An example configuration file for the Goerli network that is configured for gove You will need to update the linked-anchors and contract addresses for the applicable chains. -``` +```sh filename="config file" copy [[evm.goerli.contracts]] contract = "VAnchor" address = "0x03b88eD9Ff9bE84e4baD3F55D67AE5ABA610523C" diff --git a/pages/docs/ecosystem-roles/relayer/private-tx.mdx b/pages/docs/ecosystem-roles/relayer/private-tx.mdx index 52b8665a..1ca1d90a 100644 --- a/pages/docs/ecosystem-roles/relayer/private-tx.mdx +++ b/pages/docs/ecosystem-roles/relayer/private-tx.mdx @@ -53,7 +53,7 @@ An example configuration file for the goerli network and VAnchor contract should You will need to update the linked-anchors and contract addresses for the applicable chains. -``` +```sh filename="config file" copy [[evm.goerli.contracts]] contract = "VAnchor" address = "0x03b88eD9Ff9bE84e4baD3F55D67AE5ABA610523C" diff --git a/pages/docs/ecosystem-roles/relayer/running-relayer/cli-usage.mdx b/pages/docs/ecosystem-roles/relayer/running-relayer/cli-usage.mdx index 369e4683..10bf9222 100644 --- a/pages/docs/ecosystem-roles/relayer/running-relayer/cli-usage.mdx +++ b/pages/docs/ecosystem-roles/relayer/running-relayer/cli-usage.mdx @@ -17,14 +17,14 @@ to the end of the command. The command will vary depending on how you choose to - ``` + ```sh filename="docker run" copy docker run --platform linux/amd64 ghcr.io/webb-tools/relayer:0.5.0-rc1 --help ``` - ``` + ```sh filename="help" copy # If you used the release binary from github ./webb-relayer --help @@ -37,13 +37,13 @@ to the end of the command. The command will vary depending on how you choose to Start the relayer from a config file: -``` +```sh filename="config file" copy webb-relayer -vvv -c ``` USAGE: -``` +```sh filename="usage" copy webb-relayer [FLAGS] [OPTIONS] ``` @@ -55,7 +55,7 @@ The below lists outlines the available flags for your convienance. 
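Before the individual flags, here is a rough sketch of how they are commonly combined into a single invocation; the verbosity level and config path below are illustrative placeholders rather than values taken from this changeset.

```sh filename="example invocation" copy
# Verbose logging (-vvv), a temporary store (--tmp), and a config file
# passed with -c; the path shown here is only an example.
webb-relayer -vvv --tmp -c ./config/main.toml
```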
Prints help information -```sh +```sh filename="help" copy webb-relayer --help ``` @@ -63,7 +63,7 @@ webb-relayer --help Create the Database Store in a temporary directory and will be deleted when the process exits -```sh +```sh filename="tmp" copy webb-relayer --tmp ``` @@ -71,7 +71,7 @@ webb-relayer --tmp Prints relayer version information -```sh +```sh filename="version" copy webb-relayer --version ``` @@ -79,7 +79,7 @@ webb-relayer --version A level of verbosity, and can be used multiple times -```sh +```sh filename="vvvv" copy webb-relayer --vvvv ``` @@ -89,6 +89,6 @@ webb-relayer --vvvv Directory that contains configration files -```sh +```sh filename="config-dir" copy webb-relayer --config dir ./config ``` diff --git a/pages/docs/ecosystem-roles/relayer/running-relayer/cloud-setup.mdx b/pages/docs/ecosystem-roles/relayer/running-relayer/cloud-setup.mdx index 01444370..d18db051 100644 --- a/pages/docs/ecosystem-roles/relayer/running-relayer/cloud-setup.mdx +++ b/pages/docs/ecosystem-roles/relayer/running-relayer/cloud-setup.mdx @@ -27,14 +27,14 @@ Following the instructions below, you will be able to run the relayer as a syste **Update Ubuntu packages** -``` +```sh filename="apt update" copy # Update ubuntu packages sudo apt update && sudo apt upgrade ``` **Update Snap package** -``` +```sh filename="apt install" copy # Update snap packages sudo apt install -y snapd sudo snap install core; sudo snap refresh core @@ -42,7 +42,7 @@ sudo snap install core; sudo snap refresh core **Install dependencies** -``` +```sh filename="apt install" copy # Install dependencies sudo apt install gcc cmake pkg-config libssl-dev git clang libclang-dev sudo apt install build-essential @@ -50,7 +50,7 @@ sudo apt install build-essential **Install Rust** -``` +```sh filename="rust" copy # Install rust curl https://sh.rustup.rs -sSf | sh -s -- -y export PATH=~/.cargo/bin:$PATH @@ -59,14 +59,14 @@ source ~/.cargo/env **Install Certbot** -``` +```sh filename="certbot" copy # Install certbot sudo snap install --classic certbot && sudo ln -s /snap/bin/certbot /usr/bin/certbot ``` **Build Relayer from source** -``` +```sh filename="build" copy # Build from source git clone https://github.com/webb-tools/relayer.git cd relayer && cargo build --release --features cli @@ -78,17 +78,13 @@ cd relayer && cargo build --release --features cli Let's first create a service file for the relayer: -``` -# Create the service file -sudo touch /etc/systemd/system/webb-relayer.service -``` - Next, we will paste the following into the service file, and replace the `` with the user that will be running the relayer: -``` -# This assumes the repo has been cloned in the home directory of the user -# Paste the following into the service file, and replace the : +- This assumes the repo has been cloned in the home directory of the user +- Paste the following into the service file, and replace the ``: +``` +sudo tee /etc/systemd/system/webb-relayer.service > /dev/null << EOF [Unit] Description=WebbRelayer @@ -99,11 +95,12 @@ ExecStart=cargo run --features cli --bin webb-relayer -- -c /home//relayer [Install] WantedBy=multi-user.target +EOF ``` 2. Enable and start the system service: -``` +```sh filename="enable & start" copy sudo systemctl enable webb-relayer && sudo systemctl start webb-relayer ``` @@ -113,7 +110,7 @@ sudo systemctl enable webb-relayer && sudo systemctl start webb-relayer 2. 
Install nginx if it isn't already on your machine: -``` +```sh filename="nginx" copy sudo apt install nginx ``` @@ -121,7 +118,7 @@ sudo apt install nginx 3. Create nginx site files for your domain: -``` +```sh filename="site files" copy cd /etc/nginx/sites-available && sudo cp default && @@ -131,7 +128,7 @@ sudo ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/ 4. Modify the nginx sites-available file to: -``` +```console filename="default" copy server { listen 80; listen [::]:80; @@ -149,13 +146,13 @@ server { 5. Check the nginx configuration -``` +```sh filename="status nginx" copy sudo nginx -t ``` 6. If no issues exist, restart the nginx service: -``` +```sh filename="restart nginx" copy sudo systemctl restart nginx ``` @@ -163,13 +160,13 @@ sudo systemctl restart nginx 7. Create the self-signed certificate: -``` +```sh filename="certonly" copy sudo certbot certonly --nginx ``` 8. Modify the nginx site file: -``` +```sh filename="site file" copy map $http_upgrade $connection_upgrade { default upgrade; '' close; @@ -207,7 +204,7 @@ server { 9. Check Nginx configuration and restart the service: -``` +```sh filename="restart nginx" copy sudo nginx -t && sudo systemctl restart nginx ``` @@ -215,11 +212,15 @@ sudo nginx -t && sudo systemctl restart nginx Relayers will want to setup monitoring to ensure maximum uptime and automatic restarts when things go awry. -1. `sudo apt install -y monit` +1. Install monit + +```sh filename="install monit" copy +sudo apt install -y monit +``` 2. modify the monitrc file at: `/etc/monit/monitrc` -``` +```sh filename="monitrc" copy set httpd port 2812 and use address localhost allow localhost @@ -243,7 +244,7 @@ check process webb-relayer matching target/release/webb-relayer 3. restart monit and validate: -``` +```sh filename="restart & validate" copy sudo monit reload && sudo monit validate ``` diff --git a/pages/docs/ecosystem-roles/relayer/running-relayer/quick-start.mdx b/pages/docs/ecosystem-roles/relayer/running-relayer/quick-start.mdx index e521e726..68475435 100644 --- a/pages/docs/ecosystem-roles/relayer/running-relayer/quick-start.mdx +++ b/pages/docs/ecosystem-roles/relayer/running-relayer/quick-start.mdx @@ -29,7 +29,7 @@ Before you begin, ensure that you have the following prerequisites: 1. Open your terminal and run the following command to download the latest version of the relayer: -```bash +```sh filename="download latest version" copy curl -fsSL https://git.io/get-webb-relayer.sh | sh ``` @@ -37,7 +37,7 @@ The script will download the relayer binary (or update it if it already exists) Alternatively, if you wish to download a specific version of the relayer, run the following command: -```bash +```sh filename="download specific version" copy curl -fsSL https://git.io/get-webb-relayer.sh | sh -s ``` @@ -48,7 +48,7 @@ will also suggest adding the directory to your `PATH` environment variable. 2. 
Verify that the relayer was installed successfully by running the following command: -```bash +```sh filename="verify version" copy ~/.webb/webb-relayer --version ``` @@ -71,14 +71,14 @@ To download the example configuration file, run the following command: - ```bash + ```sh filename="download" copy curl -fsSL https://raw.githubusercontent.com/webb-tools/relayer/main/config/example/config.toml -o ~/.config/webb-relayer/config.toml ``` - ```bash + ```sh filename="download" copy curl -fsSL https://raw.githubusercontent.com/webb-tools/relayer/main/config/example/config.toml -o ~/Library/Application\ Support/tools.webb.webb-relayer/config.toml ``` @@ -100,7 +100,7 @@ For example, if you want to override the `port` value in the configuration file, For example, to modify the `port` value using the environment variable, you can do the following: -```bash +```sh filename="export" copy export WEBB_PORT=9955 ``` @@ -110,7 +110,7 @@ For our example configuration file, the following environment variables are requ create a new file called `.env` in the same directory as your current directory and add the following line: -```bash +```sh filename="PRIVATE_KEY" copy # Hex-encoded private for the relayer (64 characters) prefixed with 0x PRIVATE_KEY="0x..." ``` @@ -121,7 +121,7 @@ save the file and exit the editor. To run the relayer, run the following command: -```bash +```sh filename="vvv" copy ~/.webb/webb-relayer -vvv ``` @@ -131,7 +131,7 @@ To run the relayer, run the following command: You should see the following output: -```rust +```rust filename="output" 2023-03-14T14:13:08.315804Z DEBUG webb_relayer_config::cli: Getting default dirs for webb relayer at crates/relayer-config/src/cli.rs:61 @@ -177,14 +177,14 @@ To verify that the relayer is running, you can use the `/api/v1/ip` endpoint: - ```bash + ```sh filename="local" copy curl http://localhost:9955/api/v1/ip ``` - ```bash + ```sh filename="server" copy curl http://:9955/api/v1/ip ``` @@ -193,7 +193,7 @@ To verify that the relayer is running, you can use the `/api/v1/ip` endpoint: You should see the following output: -```json +```json filename="public-ip" copy {"ip":""} ``` diff --git a/pages/docs/ecosystem-roles/relayer/running-relayer/relayer-api.mdx b/pages/docs/ecosystem-roles/relayer/running-relayer/relayer-api.mdx index 47059cf6..2c71389c 100644 --- a/pages/docs/ecosystem-roles/relayer/running-relayer/relayer-api.mdx +++ b/pages/docs/ecosystem-roles/relayer/running-relayer/relayer-api.mdx @@ -9,13 +9,13 @@ The relayer has 3 endpoints available to query from. They are outlined below for ## **Retrieving nodes IP address:** -``` +```sh filename="Retrieving IP address" copy /api/v1/ip ``` **Expected Response:** -```json +```json filename="respose" { "ip": "127.0.0.1" } @@ -23,13 +23,13 @@ The relayer has 3 endpoints available to query from. They are outlined below for ## **Retrieve relayer configuration** -``` +```sh filename="Retrieving info" copy /api/v1/info ``` **Expected Response:** -```json +```json filename="respose" { "evm": { "rinkeby": { @@ -68,7 +68,7 @@ The relayer has 3 endpoints available to query from. They are outlined below for ##### For evm -``` +```sh filename="evm" copy /api/v1/leaves/evm/4/0x626fec5ffa7bf1ee8ced7dabde545630473e3abb ``` @@ -76,13 +76,13 @@ The relayer has 3 endpoints available to query from. 
They are outlined below for > Note: Since substrate dosent have contract address we use `tree_id` -``` +```sh filename="tree_id" copy /api/v1/leaves/substrate/4/9 ``` **Expected Response:** -```json +```json filename="respose" { "leaves": ["0x2e5c62af48845c095bfa9b90b8ec9f6b7bd98fb3ac2dd3039050a64b919951dd", "0x0f89f0ef52120b8db99f5bdbbdd4019b5ea4bcfef14b0c19d261268da8afdc24", "0x3007c62f678a503e568534487bc5b0bc651f37bbe1f34668b4c8a360f15ba3c3"], "lastQueriedBlock": "0x9f30a8" @@ -91,13 +91,13 @@ The relayer has 3 endpoints available to query from. They are outlined below for ## **Retrieve Metrics information** -``` +```sh filename="metrics" copy /api/v1/metrics ``` **Expected Response:** -```json +```json filename="respose" { "metrics": "# HELP bridge_watcher_back_off_metric specifies how many times the bridge watcher backed off\n# TYPE bridge_watcher_back_off_metric counter\nbridge_watcher_back_off_metric 0\n# HELP gas_spent_metric The total number of gas spent\n# TYPE gas_spent_metric counter\ngas_spent_metric 0\n# HELP handle_proposal_execution_metric How many times did the function handle_proposal get executed\n# TYPE handle_proposal_execution_metric counter\nhandle_proposal_execution_metric 0\n# HELP proposal_queue_attempt_metric How many times a proposal is attempted to be queued\n# TYPE proposal_queue_attempt_metric counter\nproposal_queue_attempt_metric 0\n# HELP total_active_relayer_metric The total number of active relayers\n# TYPE total_active_relayer_metric counter\ntotal_active_relayer_metric 0\n# HELP total_fee_earned_metric The total number of fees earned\n# TYPE total_fee_earned_metric counter\ntotal_fee_earned_metric 0\n# HELP total_number_of_data_stored_metric The Total number of data stored\n# TYPE total_number_of_data_stored_metric counter\ntotal_number_of_data_stored_metric 1572864\n# HELP total_number_of_proposals_metric The total number of proposals proposed\n# TYPE total_number_of_proposals_metric counter\ntotal_number_of_proposals_metric 0\n# HELP total_transaction_made_metric The total number of transaction made\n# TYPE total_transaction_made_metric counter\ntotal_transaction_made_metric 0\n# HELP transaction_queue_back_off_metric How many times the transaction queue backed off\n# TYPE transaction_queue_back_off_metric counter\ntransaction_queue_back_off_metric 0\n" } diff --git a/pages/docs/ecosystem-roles/relayer/running-relayer/running-docker.mdx b/pages/docs/ecosystem-roles/relayer/running-relayer/running-docker.mdx index cb9ff30d..bb8c5321 100644 --- a/pages/docs/ecosystem-roles/relayer/running-relayer/running-docker.mdx +++ b/pages/docs/ecosystem-roles/relayer/running-relayer/running-docker.mdx @@ -21,7 +21,7 @@ fulfill the requirements for listing your relayer on `app.webb.tools`. Before we begin we want to `ssh` into the VM and update the system using the specified system package manager: -``` +```sh filename="apt update" copy # Update packages sudo apt update && sudo apt upgrade ``` @@ -39,7 +39,7 @@ to the Relayer Overview page [here](). Let's create a new directory called `webb-relayer`. This is where we will store all our configuration files, and secrets. -``` +```sh filename="mkdir" copy mkdir -p ~/webb-relayer/{config,data,secrets} ``` @@ -51,14 +51,14 @@ signature relaying page [here](/docs/ecosystem-roles/relayer/anchor-update/). 
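Before moving on to the configuration files, you can optionally confirm that the directory layout created by the `mkdir -p` command above looks as expected; this check is an illustrative addition, not part of the original steps.

```sh filename="check layout" copy
# Should list the config, data and secrets sub-directories.
ls -la ~/webb-relayer
```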
We want to create a `toml` file to outline our configuration details: -``` +```sh filename="nano" copy # main.toml nano ~/webb-relayer/config/main.toml ``` Let's update to include the required fields for data querying: -``` +```toml filename="main.toml" copy # Webb Relayer Network Port # default: 9955 port = 9955 @@ -72,7 +72,7 @@ private-tx-relay = false For this example, we will use ETH Goerli Testnet (`chain_id = 5`). Create a file for the configuration related to this chain: -```bash +```sh filename="nano" copy # goerli.toml nano ~/webb-relayer/config/goerli.toml ``` @@ -80,7 +80,7 @@ nano ~/webb-relayer/config/goerli.toml Next we want to add the required fields to query the data from the Anchor deployed on Goerli Testnet. For an exhasutive list of configuration please refer to the [Configuration Options]() doc. Let's add the following to the file: -```toml +```toml filename="goerli.toml" copy # Block which represents properties for a network [evm.goerli] name = "goerli" @@ -131,13 +131,13 @@ events-watcher = { enabled = true, polling-interval = 15000 } As you may have noticed, there are a few environment variables inside the configuration file, and we will have to supply them. To do so, lets create a `.env` file with these values: -```bash +```sh filename="nano" copy nano ~/webb-relayer/.env ``` Add the following fields: -```bash +```sh filename=".env" copy # The internal Webb Relayer Port # this will not be the public port, but will be used internally # inside docker. @@ -169,13 +169,13 @@ run the reverse proxy and the Webb relayer. The section will cover setting up the required docker service in a docker compose file. Let's start by creating a docker-compose file: -```bash +```sh filename="nano" copy nano ~/webb-relayer/docker-compose.yml ``` Add the following lines: -```yaml +```yaml filename="docker-compose.yml" copy version: "3" services: @@ -227,13 +227,13 @@ volumes: This guide makes use of [Caddy](https://caddyserver.com/) as a reverse proxy. Caddy is a powerful reverse proxy that makes it incredibly easy to setup with only a few lines. Let's take a look at the configuration: -```bash +```sh filename="nano" copy nano ~/webb-relayer/config/Caddyfile ``` Add the following lines: -```bash +```sh filename="Caddyfile" copy { # Remove the below line to disable debug logging, could be helpful # but noisy. @@ -263,7 +263,7 @@ We have now successfully setup the reverse proxy, and we are ready to run the re Go to `~/webb-relayer` and then fire-up the following command: -```bash +```sh filename="compose up" copy cd ~/webb-relayer # Then run docker docker compose up # add -d if you want to run it in the backgroud. @@ -276,7 +276,7 @@ endpoint to view the configuration we outlined above. Let's make sure we have successfully setup a data-querying relayer. 
To do so, we will query the relayer's endpoint: -```bash +```sh filename="test" # Replace this with your domain name, that you added inside the .env file # if running locally, you should just go assume the DOMAIN is localhost:9955 https://$DOMAIN/api/v1/info @@ -284,7 +284,7 @@ https://$DOMAIN/api/v1/info If everything is working correctly, you should see a response similar to this: -```bash +```sh filename="response" { evm: { "5": { diff --git a/pages/docs/ecosystem-roles/validator/api-reference/cli.mdx b/pages/docs/ecosystem-roles/validator/api-reference/cli.mdx index a142bca2..27bb4151 100644 --- a/pages/docs/ecosystem-roles/validator/api-reference/cli.mdx +++ b/pages/docs/ecosystem-roles/validator/api-reference/cli.mdx @@ -19,7 +19,7 @@ added to the end of the command. The command will vary depending on how you choo - ``` + ```sh filename="help" copy docker run --platform linux/amd64 --network="host" -v "/var/lib/data" --entrypoint ./tangle-standalone \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ --help @@ -28,7 +28,7 @@ added to the end of the command. The command will vary depending on how you choo - ``` + ```sh filename="help" copy # If you used the release binary ./tangle-standalone --help @@ -43,13 +43,13 @@ If you have compiled the tangle-parachain binary its important to note that the provided first will be passed to the parachain node, while the arguments provided after `--` will be passed to the relay chain node. -``` +```sh filename="args" copy tangle-parachain -- ``` USAGE: -``` +```sh filename="usage" copy tangle-parachain [OPTIONS] [-- ...] tangle-parachain ``` @@ -63,7 +63,7 @@ The below lists the most commonly used flags for your convienance. Shortcut for `--name Alice --validator` with session keys for `Alice` added to keystore. Commonly used for development or local test networks. -```sh +```sh filename="alice" copy tangle-standalone --alice ``` @@ -77,7 +77,7 @@ blocks (i.e a number). NOTE: only finalized blocks are subject for removal! -```sh +```sh filename="blocks-pruning" copy tangle-standalone --blocks-pruning 120 ``` @@ -86,7 +86,7 @@ tangle-standalone --blocks-pruning 120 Shortcut for `--name Bob --validator` with session keys for `Bob` added to keystore. Commonly used for development or local test networks. -```sh +```sh filename="bob" copy tangle-standalone --bob ``` @@ -94,7 +94,7 @@ tangle-standalone --bob Specify a list of bootnodes. -```sh +```sh filename="bootnodes" copy tangle-standalone --bootnodes /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWAWueKNxuNwMbAtss3nDTQhMg4gG3XQBnWdQdu2DuEsZS ``` @@ -105,7 +105,7 @@ Specify the chain specification. It can be one of the predefined ones (dev, local, or staging) or it can be a path to a file with the chainspec (such as one exported by the `build-spec` subcommand). -```sh +```sh filename="local" copy tangle-standalone --chain standalone-local ``` @@ -114,7 +114,7 @@ tangle-standalone --chain standalone-local Shortcut for `--name Charlie --validator` with session keys for `Charlie` added to keystore. Commonly used for development or local test networks. -```sh +```sh filename="charlie" copy tangle-standalone --charlie ``` @@ -124,7 +124,7 @@ Run node as collator. (Not applicable at this time.) Note that this is the same as running with `--validator`. -```sh +```sh filename="collator" copy tangle-standalone --collator ``` @@ -132,7 +132,7 @@ tangle-standalone --collator Specify custom base path. 
-```sh +```sh filename="base path" copy tangle-standalone --base-path /data ``` @@ -140,7 +140,7 @@ tangle-standalone --base-path /data Limit the memory the database cache can use -```sh +```sh filename="db-cache" copy tangle-standalone --db-cache 128 ``` @@ -153,7 +153,7 @@ This includes displaying the log target, log level and thread name. This is automatically enabled when something is logged with any higher level than `info`. -```sh +```sh filename="log-output" copy tangle-standalone --detailed-log-output ``` @@ -164,7 +164,7 @@ Specify the development chain. This flag sets `--chain=dev`, `--force-authoring`, `--rpc-cors=all`, `--alice`, and `--tmp` flags, unless explicitly overridden. -```sh +```sh filename="dev" copy tangle-standalone --dev ``` @@ -179,7 +179,7 @@ The execution strategy that should be used by all execution contexts `both` - execute with both native and Wasm builds `nativeelsewasm` - execute with the native build if possible and if it fails, then execute with Wasm -```sh +```sh filename="wasm" copy tangle-standalone --execution wasm ``` @@ -187,7 +187,7 @@ tangle-standalone --execution wasm Enable authoring even when offline -```sh +```sh filename="authoring" copy tangle-parachain --force-authoring ``` @@ -195,7 +195,7 @@ tangle-parachain --force-authoring Specify custom keystore path -```sh +```sh filename="keystore path" copy tangle-standalone --keystore-path /tmp/chain/data/ ``` @@ -203,7 +203,7 @@ tangle-standalone --keystore-path /tmp/chain/data/ Specify custom URIs to connect to for keystore-services -```sh +```sh filename="keystore url" copy tangle-standalone --keystore-uri foo://example.com:8042/over/ ``` @@ -213,7 +213,7 @@ The human-readable name for this node. The node name will be reported to the telemetry server, if enabled. -```sh +```sh filename="name" copy tangle-standalone --name zeus ``` @@ -233,7 +233,7 @@ WARNING: Secrets provided as command-line arguments are easily exposed. Use of t option should be limited to development and testing. To use an externally managed secret key, use `--node-key-file` instead. -```sh +```sh filename="node-key" copy tangle-standalone --node-key b6806626f5e4490c27a4ccffed4fed513539b6a455b14b32f58878cf7c5c4e68 ``` @@ -249,7 +249,7 @@ follows: If the file does not exist, it is created with a newly generated secret key of the chosen type. -```sh +```sh filename="node-key-file" copy tangle-standalone --node-key-file ./node-keys-file/ ``` @@ -257,7 +257,7 @@ tangle-standalone --node-key-file ./node-keys-file/ Specify p2p protocol TCP port -```sh +```sh filename="port" copy tangle-standalone --port 9944 ``` @@ -267,7 +267,7 @@ Expose Prometheus exporter on all interfaces. Default is local. -```sh +```sh filename="prometheus" copy tangle-standalone --prometheus-external ``` @@ -275,7 +275,7 @@ tangle-standalone --prometheus-external Specify Prometheus exporter TCP Port -```sh +```sh filename="prometheus-port" copy tangle-standalone --prometheus-port 9090 ``` @@ -287,7 +287,7 @@ A comma-separated list of origins (protocol://domain or special `null` value). V `all` will disable origin validation. Default is to allow localhost and https://polkadot.js.org origins. When running in --dev mode the default is to allow all origins. -```sh +```sh filename="rpc-cors" copy tangle-standalone --rpc-cors "*" ``` @@ -300,7 +300,7 @@ proxy server to filter out dangerous methods. More details: https://docs.substrate.io/main-docs/build/custom-rpc/#public-rpcs. Use `--unsafe-rpc-external` to suppress the warning if you understand the risks. 
-```sh +```sh filename="rpc-external" copy tangle-standalone --rpc-external ``` @@ -308,7 +308,7 @@ tangle-standalone --rpc-external Specify HTTP RPC server TCP port -```sh +```sh filename="rpc-port" copy tangle-standalone --rpc-port 9933 ``` @@ -320,7 +320,7 @@ Default is to keep only the last 256 blocks, otherwise, the state can be kept fo the blocks (i.e 'archive'), or for all of the canonical blocks (i.e 'archive-canonical'). -```sh +```sh filename="state-pruning" copy tangle-standalone --state-pruning 128 ``` @@ -332,7 +332,7 @@ This flag can be passed multiple times as a means to specify multiple telemetry endpoints. Verbosity levels range from 0-9, with 0 denoting the least verbosity. Expected format is 'URL VERBOSITY'. -```sh +```sh filename="wss" copy tangle-standalone --telemetry-url 'wss://foo/bar 0' ``` @@ -343,7 +343,7 @@ Enable validator mode. The node will be started with the authority role and actively participate in any consensus task that it can (e.g. depending on availability of local keys). -```sh +```sh filename="validator" copy tangle-standalone --validator ``` @@ -357,7 +357,7 @@ Method for executing Wasm runtime code `compiled` - this is the default and uses the Wasmtime compiled runtime `interpreted-i-know-what-i-do` - uses the wasmi interpreter -```sh +```sh filename="wasm-execution" copy tangle-standalone --wasm-execution compiled ``` @@ -370,7 +370,7 @@ proxy server to filter out dangerous methods. More details: https://docs.substrate.io/main-docs/build/custom-rpc/#public-rpcs. Use `--unsafe-ws-external` to suppress the warning if you understand the risks. -```sh +```sh filename="ws-external" copy tangle-standalone --ws-external ``` @@ -378,7 +378,7 @@ tangle-standalone --ws-external Specify WebSockets RPC server TCP port -```sh +```sh filename="ws-port" copy tangle-standalone --ws-port 9944 ``` @@ -388,7 +388,7 @@ The following subcommands are available: USAGE: -``` +```sh filename="subcommand" copy tangle-standalone ``` diff --git a/pages/docs/ecosystem-roles/validator/deploy-with-docker/full-node.mdx b/pages/docs/ecosystem-roles/validator/deploy-with-docker/full-node.mdx index b1effc3b..e3886ca6 100644 --- a/pages/docs/ecosystem-roles/validator/deploy-with-docker/full-node.mdx +++ b/pages/docs/ecosystem-roles/validator/deploy-with-docker/full-node.mdx @@ -18,7 +18,7 @@ set their keys, fetch the applicable chainspec and run the start command to get ### **1. Pull the Tangle Docker image:** -``` +```sh filename="pull" copy # Only use "main" if you know what you are doing, it will use the latest and maybe unstable version of the node. docker pull ghcr.io/webb-tools/tangle/tangle-standalone:main @@ -28,7 +28,7 @@ docker pull ghcr.io/webb-tools/tangle/tangle-standalone:main Let us create a directory where we will store all the data for our node. This includes the chain data, and logs. -``` +```sh filename="mkdir" copy mkdir /var/lib/tangle/ ``` @@ -37,7 +37,7 @@ mkdir /var/lib/tangle/ To join the Tangle Test network, we need to fetch the appropriate chainspec for the Tangle network. 
Download the latest chainspec for standalone testnet: -``` +```sh filename="get chainspec" copy # Fetches chainspec for Tangle network wget https://github.com/webb-tools/tangle/blob/main/chainspecs/tangle-standalone.json ``` @@ -50,7 +50,7 @@ Please make a reference where you have stored this `json` file as we will need i To start the node run the following command: -``` +```sh filename="docker run" copy docker run --rm -it -v /var/lib/tangle/:/data ghcr.io/webb-tools/tangle/tangle-standalone:main \ --chain tangle-testnet \ --name="YOUR-NODE-NAME" \ @@ -79,7 +79,7 @@ The upgrade process is straightforward and is the same for a full node. 1. Stop the docker container: -``` +```sh filename="docker stop" copy sudo docker stop `CONTAINER_ID` ``` @@ -96,7 +96,7 @@ If you need a fresh instance of your Tangle node, you can purge your node by rem You'll first need to stop the Docker container: -``` +```sh filename="docker stop" copy sudo docker stop `CONTAINER_ID` ``` @@ -104,14 +104,14 @@ If you did not use the `-v` flag to specify a local directory for storing your c If you did spin up your node with the `-v` flag, you will need to purge the specified directory. For example, for the suggested data directly, you can run the following command to purge your parachain node data: -``` +```sh filename="rm" copy # purges standalone data sudo rm -rf /data/chains/* ``` If you ran with parachain node you can run the following command to purge your relay-chain node data: -``` +```sh filename="rm" copy # purges relay chain data sudo rm -rf /data/polkadot/* ``` diff --git a/pages/docs/ecosystem-roles/validator/deploy-with-docker/relayer-node.mdx b/pages/docs/ecosystem-roles/validator/deploy-with-docker/relayer-node.mdx index 8d44042d..3aade0b0 100644 --- a/pages/docs/ecosystem-roles/validator/deploy-with-docker/relayer-node.mdx +++ b/pages/docs/ecosystem-roles/validator/deploy-with-docker/relayer-node.mdx @@ -29,7 +29,7 @@ command to get up and running. We will used the pre-built Tangle Docker image to generate and insert the required keys for our node. -``` +```sh filename="pull" copy # Only use "main" if you know what you are doing, it will use the latest and maybe unstable version of the node. docker pull ghcr.io/webb-tools/tangle/tangle-standalone:main @@ -39,7 +39,7 @@ docker pull ghcr.io/webb-tools/tangle/tangle-standalone:main Let us create a directory where we will store all the data for our node. This includes the chain data, keys, and logs. -``` +```sh filename="mkdir" copy mkdir /var/lib/tangle/ ``` @@ -58,7 +58,7 @@ should paste your SURI when the command asks for it. **Account Keys** -``` +```sh filename="Acco" copy # it will ask for your suri, enter it. 
docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ @@ -70,7 +70,7 @@ ghcr.io/webb-tools/tangle/tangle-standalone:main \ **Aura Keys** -``` +```sh filename="Aura" copy docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ key insert --base-path /var/lib/tangle/ \ @@ -81,7 +81,7 @@ ghcr.io/webb-tools/tangle/tangle-standalone:main \ **Im-online Keys** - **these keys are optional** -``` +```sh filename="Imonline" copy docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ key insert --base-path /var/lib/tangle/ \ @@ -92,7 +92,7 @@ ghcr.io/webb-tools/tangle/tangle-standalone:main \ **DKG Keys** -``` +```sh filename="DKG" copy docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ tangle-standalone key insert --base-path /data \ @@ -103,7 +103,7 @@ ghcr.io/webb-tools/tangle/tangle-standalone:main \ **Grandpa Keys** -``` +```sh filename="Grandpa" copy docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ tangle-standalone key insert --base-path /data \ @@ -114,7 +114,7 @@ ghcr.io/webb-tools/tangle/tangle-standalone:main \ To ensure you have successfully generated the keys correctly run: -``` +```sh filename="ls" copy ls ~/webb/tangle/chains/*/keystore # You should see a some file(s) there, these are the keys. ``` @@ -126,13 +126,13 @@ in the [Tangle repo](/docs/ecosystem-roles/validator/deploy-with-docker/relayer- Let's start by creating a docker-compose file: -```bash +```sh filename="nano" copy nano ~/webb/tangle/docker-compose.yml ``` Add the following lines: -```yaml +```yaml filename="docker-compose.yml" copy # This an example of a docker compose file which contains both Relayer and Tangle Node. version: "3" @@ -203,7 +203,7 @@ volumes: Prior to spinning up the Docker containers, we need to set some environment variables. Below displays an example `.env` file but you will need to update to reflect your own environment. -```bash +```sh filename="export variables" copy export TANGLE_RELEASE_VERSION=main export RELAYER_RELEASE_VERSION=0.5.0-rc1 export BASE_PATH=/tmp/data/ @@ -215,7 +215,7 @@ export WEBB_PORT=9955 With our keys generated and our docker-compose file created, we can now start the relayer and validator node. -```bash +```sh filename="compose up" copy docker compose up -d ``` @@ -229,7 +229,7 @@ The upgrade process is straightforward and is the same for a full node or valida 1. Stop the docker container: -``` +```sh filename="docker stop" copy sudo docker stop `CONTAINER_ID` ``` @@ -245,7 +245,7 @@ If you need a fresh instance of your Tangle node, you can purge your node by rem You'll first need to stop the Docker container: -``` +```sh filename="docker stop" copy sudo docker stop `CONTAINER_ID` ``` @@ -253,14 +253,14 @@ If you did not use the `-v` flag to specify a local directory for storing your c If you did spin up your node with the `-v` flag, you will need to purge the specified directory. 
For example, for the suggested data directly, you can run the following command to purge your parachain node data: -``` +```sh filename="rm" copy # purges standalone data sudo rm -rf /data/chains/* ``` If you ran with parachain node you can run the following command to purge your relay-chain node data: -``` +```sh filename="rm" copy # purges relay chain data sudo rm -rf /data/polkadot/* ``` diff --git a/pages/docs/ecosystem-roles/validator/deploy-with-docker/validator-node.mdx b/pages/docs/ecosystem-roles/validator/deploy-with-docker/validator-node.mdx index 76ee6544..18281798 100644 --- a/pages/docs/ecosystem-roles/validator/deploy-with-docker/validator-node.mdx +++ b/pages/docs/ecosystem-roles/validator/deploy-with-docker/validator-node.mdx @@ -17,7 +17,7 @@ please visit the official Docker [docs](https://docs.docker.com/get-docker/). Wh Although we can make use of the provided `docker-compose` file in the [Tangle repo](https://github.com/webb-tools/tangle/tree/main/docker/tangle-standalone), we pull the `tangle-standalone:main` Docker image from ghcr.io so that we can generate and insert our required keys before starting the node. -``` +```sh filename="pull" copy # Only use "main" if you know what you are doing, it will use the latest and maybe unstable version of the node. docker pull ghcr.io/webb-tools/tangle/tangle-standalone:main @@ -27,7 +27,7 @@ docker pull ghcr.io/webb-tools/tangle/tangle-standalone:main Let us create a directory where we will store all the data for our node. This includes the chain data, keys, and logs. -``` +```sh filename="mkdir" copy mkdir /var/lib/tangle/ ``` @@ -36,7 +36,7 @@ mkdir /var/lib/tangle/ To join the Tangle Test network as node operator we need to fetch the appropriate chainspec for the Tangle network. Download the latest chainspec for standalone testnet: -``` +```sh filename="get chainspec" copy # Fetches chainspec for Tangle network wget https://github.com/webb-tools/tangle/blob/main/chainspecs/standalone/tangle-standalone.json ``` @@ -59,7 +59,7 @@ should paste your SURI when the command asks for it. **Account Keys** -``` +```sh filename="Acco" copy # it will ask for your suri, enter it. 
docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ @@ -71,7 +71,7 @@ ghcr.io/webb-tools/tangle/tangle-standalone:main \ **Aura Keys** -``` +```sh filename="Aura" copy docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ key insert --base-path /var/lib/tangle/ \ @@ -82,7 +82,7 @@ ghcr.io/webb-tools/tangle/tangle-standalone:main \ **Im-online Keys** - **these keys are optional (required if you are running as a validator)** -``` +```sh filename="Imonline" copy docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ key insert --base-path /var/lib/tangle/ \ @@ -93,7 +93,7 @@ ghcr.io/webb-tools/tangle/tangle-standalone:main \ **DKG Keys** -``` +```sh filename="DKG" copy docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ tangle-standalone key insert --base-path /data \ @@ -104,7 +104,7 @@ ghcr.io/webb-tools/tangle/tangle-standalone:main \ **Grandpa Keys** -``` +```sh filename="Grandpa" copy docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ tangle-standalone key insert --base-path /data \ @@ -115,7 +115,7 @@ ghcr.io/webb-tools/tangle/tangle-standalone:main \ To ensure you have successfully generated the keys correctly run: -``` +```sh filename="ls" copy ls ~/webb/tangle/chains/*/keystore # You should see a some file(s) there, these are the keys. ``` @@ -127,7 +127,7 @@ if you want the node to auto generate the keys, add the `--auto-insert-keys` fla To start the node run the following command: -``` +```sh filename="docker run" copy docker run --platform linux/amd64 --network="host" -v "/var/lib/data" --entrypoint ./tangle-standalone \ ghcr.io/webb-tools/tangle/tangle-standalone:main \ --base-path=/data \ @@ -150,7 +150,7 @@ such as the chain specification, node name, role, genesis state, and more. If you followed the installation instructions for Tangle, once synced, you will be connected to peers and see blocks being produced on the Tangle network! -``` +```sh filename="logs" 2023-03-22 14:55:51 Tangle Standalone Node 2023-03-22 14:55:51 ✌️ version 0.1.15-54624e3-aarch64-macos 2023-03-22 14:55:51 ❤️ by Webb Technologies Inc., 2017-2023 @@ -185,14 +185,14 @@ blocks being produced on the Tangle network! The docker-compose file will spin up a container running Tangle standalone node, but you have to set the following environment variables. Remember to customize your the values depending on your environment and then copy paste this to CLI. -``` +```sh filename="set variables" copy RELEASE_VERSION=main CHAINSPEC_PATH=/tmp/chainspec/ ``` After that run: -``` +```sh filename="compose up" copy docker compose up -d ``` @@ -204,7 +204,7 @@ The upgrade process is straightforward and is the same for a full node. 1. 
Stop the docker container: -``` +```sh filename="docker stop" copy sudo docker stop `CONTAINER_ID` ``` @@ -223,7 +223,7 @@ If you need a fresh instance of your Tangle node, you can purge your node by rem You'll first need to stop the Docker container: -``` +```sh filename="docker stop" copy sudo docker stop `CONTAINER_ID` ``` @@ -231,7 +231,7 @@ If you did not use the `-v` flag to specify a local directory for storing your c If you did spin up your node with the `-v` flag, you will need to purge the specified directory. For example, for the suggested data directly, you can run the following command to purge your standalone node data: -``` +```sh filename="rm" copy # purges standalone data sudo rm -rf /data/chains/* ``` diff --git a/pages/docs/ecosystem-roles/validator/monitoring/alert-manager.mdx b/pages/docs/ecosystem-roles/validator/monitoring/alert-manager.mdx index ed72a3cd..a6b4664a 100644 --- a/pages/docs/ecosystem-roles/validator/monitoring/alert-manager.mdx +++ b/pages/docs/ecosystem-roles/validator/monitoring/alert-manager.mdx @@ -37,11 +37,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.darwin-amd64.tar.gz ``` ARM version: - ```bash + ```sh filename="ARM" copy wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.darwin-arm64.tar.gz ``` @@ -49,11 +49,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.linux-amd64.tar.gz ``` ARM version: - ```bash + ```sh filename="ARM" copy wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.linux-arm64.tar.gz && ``` @@ -63,11 +63,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.windows-amd64.tar.gz ``` ARM version: - ```bash + ```sh filename="ARM" copy wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.windows-arm64.tar.gz ``` @@ -78,7 +78,7 @@ Let's first start by downloading the latest releases of the above mentioned modu Run the following command: -``` +```sh filename="tar" copy tar xvf alertmanager-*.tar.gz ``` @@ -90,7 +90,7 @@ tar xvf alertmanager-*.tar.gz Copy the `alertmanager` binary and `amtool`: -``` +```sh filename="cp" copy sudo cp ./alertmanager-*.linux-amd64/alertmanager /usr/local/bin/ && sudo cp ./alertmanager-*.linux-amd64/amtool /usr/local/bin/ ``` @@ -99,13 +99,13 @@ sudo cp ./alertmanager-*.linux-amd64/amtool /usr/local/bin/ Now we want to create dedicated users for the Alertmanager module we have installed: -``` +```sh filename="useradd" copy sudo useradd --no-create-home --shell /usr/sbin/nologin alertmanager ``` **5. 
Create Directories for `Alertmanager`:**

-```
+```sh filename="mkdir" copy
sudo mkdir /etc/alertmanager &&
sudo mkdir /var/lib/alertmanager
```
@@ -116,7 +116,7 @@ We need to give our user permissions to access these directories:

**alertManager**:

-```
+```sh filename="chown" copy
sudo chown alertmanager:alertmanager /etc/alertmanager/ -R &&
sudo chown alertmanager:alertmanager /var/lib/alertmanager/ -R &&
sudo chown alertmanager:alertmanager /usr/local/bin/alertmanager &&
@@ -125,7 +125,7 @@ sudo chown alertmanager:alertmanager /usr/local/bin/amtool

**7. Finally, let's clean up these directories:**

-```
+```sh filename="rm" copy
rm -rf ./alertmanager*
```
@@ -141,7 +141,7 @@ The first thing we need to do is add `rules.yml` file to our Prometheus configur

Let’s create the `rules.yml` file that will give the rules for Alert manager:

-```
+```sh filename="nano" copy
sudo touch /etc/prometheus/rules.yml
sudo nano /etc/prometheus/rules.yml
```
@@ -151,7 +151,7 @@ You can create all kinds of rules that can triggered, for an exhausted list of r

Add the following lines and save the file:

-```
+```yaml filename="rules.yml" copy
groups:
  - name: alert_rules
    rules:
@@ -178,13 +178,13 @@ The criteria for triggering an alert are set in the `expr:` part. You can custom

Then, check the rules file:

-```
+```sh filename="promtool rules" copy
promtool check rules /etc/prometheus/rules.yml
```

And finally, check the Prometheus config file:

-```
+```sh filename="promtool check" copy
promtool check config /etc/prometheus/prometheus.yml
```
@@ -212,14 +212,14 @@ The Alert manager config file is used to set the external service that will be c

Let’s create the file:

-```
+```sh filename="nano" copy
sudo touch /etc/alertmanager/alertmanager.yml
sudo nano /etc/alertmanager/alertmanager.yml
```

And add the Gmail configuration to it and save the file:

-```
+```yaml filename="Gmail config" copy
global:
  resolve_timeout: 1m
@@ -285,14 +285,8 @@ Of course, you have to change the email addresses and the auth_password with the

Create and open the Alert manager service file:

-```
-sudo touch /etc/systemd/system/alertmanager.service &&
-sudo nano /etc/systemd/system/alertmanager.service
-```
-
-Add the following lines:
-
-```
+```sh filename="create service" copy
+sudo tee /etc/systemd/system/alertmanager.service > /dev/null << EOF
[Unit]
Description=AlertManager Server Service
Wants=network-online.target
@@ -310,13 +304,14 @@ Add the following lines:

[Install]
WantedBy=multi-user.target
+EOF
```

## Starting the Services

Launch a daemon reload to take the services into account in systemd:

-```
+```sh filename="daemon-reload" copy
sudo systemctl daemon-reload
```
@@ -324,7 +319,7 @@ Next, we will want to start the alertManager service:

**alertManager**:

-```
+```sh filename="start service" copy
sudo systemctl start alertmanager.service
```
@@ -332,15 +327,15 @@ And check that they are working fine:

**alertManager**::

-```
-systemctl status alertmanager.service
+```sh filename="status" copy
+sudo systemctl status alertmanager.service
```

If everything is working adequately, activate the services! 
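Before enabling the service at boot, you can optionally confirm that Alertmanager is answering over HTTP; this check is an addition to the original steps and assumes Alertmanager is still listening on its default port 9093.

```sh filename="check alertmanager" copy
# Expects an HTTP 200 once Alertmanager is up and ready to serve requests.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9093/-/ready
```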
**alertManager**: -``` +```sh filename="enable" copy sudo systemctl enable alertmanager.service ``` diff --git a/pages/docs/ecosystem-roles/validator/monitoring/grafana.mdx b/pages/docs/ecosystem-roles/validator/monitoring/grafana.mdx index 9d8ab4b0..916cb9ac 100644 --- a/pages/docs/ecosystem-roles/validator/monitoring/grafana.mdx +++ b/pages/docs/ecosystem-roles/validator/monitoring/grafana.mdx @@ -36,7 +36,7 @@ Let's first start by downloading the latest releases of the above mentioned modu - ``` + ```sh filename="brew" copy brew update brew install grafana ``` @@ -44,7 +44,7 @@ Let's first start by downloading the latest releases of the above mentioned modu - ``` + ```sh filename="linux" copy sudo apt-get install -y apt-transport-https sudo apt-get install -y software-properties-common wget wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add - @@ -62,25 +62,25 @@ Let's first start by downloading the latest releases of the above mentioned modu please visit the offical docs [here](https://grafana.com/docs/grafana/v9.0/setup-grafana/installation/mac/). -``` +```sh filename="add-apt" copy sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main" ``` **3. Refresh your APT cache to update your package lists:** -``` +```sh filename="apt update" copy sudo apt update ``` **4. Next, make sure Grafana will be installed from the Grafana repository:** -``` +```sh filename="apt-cache" copy apt-cache policy grafana ``` The output of the previous command tells you the version of Grafana that you are about to install, and where you will retrieve the package from. Verify that the installation candidate at the top of the list will come from the official Grafana repository at `https://packages.grafana.com/oss/deb`. -``` +```sh filename="output" Output of apt-cache policy grafana grafana: Installed: (none) @@ -93,13 +93,13 @@ grafana: **5. You can now proceed with the installation:** -``` +```sh filename="install grafana" copy sudo apt install grafana ``` **6. Install the Alert manager plugin for Grafana:** -``` +```sh filename="grafana-cli" copy sudo grafana-cli plugins install camptocamp-prometheus-alertmanager-datasource ``` @@ -111,25 +111,25 @@ The Grafana’s service is automatically created during extraction of the deb pa Launch a daemon reload to take the services into account in systemd: -``` +```sh filename="daemon-reload" copy sudo systemctl daemon-reload ``` **Start the Grafana service:** -``` +```sh filename="start service" copy sudo systemctl start grafana-server ``` And check that they are working fine, one by one: -``` +```sh filename="status" copy systemctl status grafana-server ``` If everything is working adequately, activate the services! 
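Before enabling Grafana at boot, you can optionally confirm that it is serving requests; this check is an addition to the original steps and assumes Grafana is still on its default port 3000.

```sh filename="check grafana" copy
# Grafana's health endpoint; a small JSON body with "database": "ok"
# indicates the server is up.
curl -s http://localhost:3000/api/health
```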
-``` +```sh filename="enable" copy sudo systemctl enable grafana-server ``` diff --git a/pages/docs/ecosystem-roles/validator/monitoring/loki.mdx b/pages/docs/ecosystem-roles/validator/monitoring/loki.mdx index 79701e13..31d92fa6 100644 --- a/pages/docs/ecosystem-roles/validator/monitoring/loki.mdx +++ b/pages/docs/ecosystem-roles/validator/monitoring/loki.mdx @@ -28,11 +28,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/loki-darwin-amd64.zip" ``` ARM version: - ```bash + ```sh filename="ARM" copy curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/loki-darwin-arm64.zip" ``` @@ -40,11 +40,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/loki-linux-amd64.zip" ``` ARM version: - ```bash + ```sh filename="ARM" copy curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/loki-linux-arm64.zip" ``` @@ -54,7 +54,7 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/loki-windows-amd64.exe.zip" ``` @@ -67,11 +67,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/promtail-darwin-amd64.zip" ``` ARM version: - ```bash + ```sh filename="ARM" copy curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/promtail-darwin-arm64.zip" ``` @@ -79,11 +79,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/promtail-linux-amd64.zip" ``` ARM version: - ```bash + ```sh filename="ARM" copy curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/promtail-linux-arm64.zip" ``` @@ -91,7 +91,7 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/promtail-windows-amd64.exe.zip" ``` @@ -100,14 +100,14 @@ Let's first start by downloading the latest releases of the above mentioned modu **3. Extract the Downloaded Files:** -``` +```sh filename="unzip" copy unzip "loki-linux-amd64.zip" && unzip "promtail-linux-amd64.zip" ``` **4. Copy the Extracted Files into `/usr/local/bin`:** -``` +```sh filename="cp" copy sudo cp loki-linux-amd64 /usr/local/bin/ && sudo cp promtail-linux-amd64 /usr/local/bin/ ``` @@ -116,14 +116,14 @@ sudo cp promtail-linux-amd64 /usr/local/bin/ Now we want to create dedicated users for each of the modules we have installed: -``` +```sh filename="useradd" copy sudo useradd --no-create-home --shell /usr/sbin/nologin loki && sudo useradd --no-create-home --shell /usr/sbin/nologin promtail ``` **6. 
Create Directories for `loki`, and `promtail`:** -``` +```sh filename="mkdir" copy sudo mkdir /etc/loki && sudo mkdir /etc/promtail ``` @@ -132,14 +132,14 @@ sudo mkdir /etc/promtail We need to give our user permissions to access these directories: -``` +```sh filename="chown" copy sudo chown loki:loki /usr/local/bin/loki-linux-amd64 && sudo chown promtail:promtail /usr/local/bin/promtail-linux-amd64 ``` **9. Finally, let's clean up these directories:** -``` +```sh filename="rm" copy rm -rf ./loki-linux-amd64* && rm -rf ./promtail-linux-amd64* ``` @@ -157,12 +157,12 @@ There are many other config options for Loki, and you can read more about Loki c Let’s create the file: -``` +```sh filename="nano" copy sudo touch /etc/loki/config.yml sudo nano /etc/loki/config.yml ``` -``` +```yaml filename="config.yaml" copy auth_enabled: false server: @@ -220,12 +220,12 @@ want to pick up. There are many other config options for Promtail, and you can r Let’s create the file: -``` +```sh filename="nano" copy sudo touch /etc/promtail/config.yml sudo nano /etc/promtail/config.yml ``` -``` +```yaml filename="config.yaml" copy server: http_listen_port: 9080 grpc_listen_port: 0 @@ -252,14 +252,8 @@ scrape_configs: Create and open the Loki service file: -``` -sudo touch /etc/systemd/system/loki.service && -sudo nano /etc/systemd/system/loki.service -``` - -Add the following lines: - -``` +```sh filename="loki.service" copy +sudo tee /etc/systemd/system/loki.service > /dev/null << EOF [Unit] Description=Loki Service Wants=network-online.target @@ -273,20 +267,15 @@ Add the following lines: [Install] WantedBy=multi-user.target +EOF ``` ### Promtail Create and open the Promtail service file: -``` -sudo touch /etc/systemd/system/promtail.service && -sudo nano /etc/systemd/system/promtail.service -``` - -Add the following lines: - -``` +```sh filename="promtail.service" copy +sudo tee /etc/systemd/system/promtail.service > /dev/null << EOF [Unit] Description=Promtail Service Wants=network-online.target @@ -300,6 +289,7 @@ Add the following lines: [Install] WantedBy=multi-user.target +EOF ``` Great! You have now configured all the services needed to run Loki. @@ -308,13 +298,13 @@ Great! You have now configured all the services needed to run Loki. Launch a daemon reload to take the services into account in systemd: -``` +```sh filename="daemon-reload" copy sudo systemctl daemon-reload ``` Next, we will want to start each service: -``` +```sh filename="start service" copy sudo systemctl start loki.service && sudo systemctl start promtail.service ``` @@ -323,19 +313,19 @@ And check that they are working fine, one by one: **loki**: -``` +```sh filename="status" copy systemctl status loki.service ``` **promtail**: -``` +```sh filename="status" copy systemctl status promtail.service ``` If everything is working adequately, activate the services! 
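Before enabling the two services at boot, you can optionally confirm that Loki and Promtail are responding; this check is an addition to the original steps, and it assumes Loki is on its default port 3100 while Promtail uses the 9080 listen port from the config above.

```sh filename="check loki & promtail" copy
# Loki answers "ready" on this endpoint once start-up has finished.
curl -s http://localhost:3100/ready
# Promtail serves a status page on the HTTP port set in its config.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9080/targets
```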
-``` +```sh filename="enable" copy sudo systemctl enable loki.service && sudo systemctl enable promtail.service ``` diff --git a/pages/docs/ecosystem-roles/validator/monitoring/prometheus.mdx b/pages/docs/ecosystem-roles/validator/monitoring/prometheus.mdx index 9cf6eeda..bbcb3f74 100644 --- a/pages/docs/ecosystem-roles/validator/monitoring/prometheus.mdx +++ b/pages/docs/ecosystem-roles/validator/monitoring/prometheus.mdx @@ -40,11 +40,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.darwin-amd64.tar.gz ``` ARM version: - ```bash + ```sh filename="ARM" copy wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.darwin-arm64.tar.gz ``` @@ -52,11 +52,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.linux-amd64.tar.gz ``` ARM version: - ```bash + ```sh filename="ARM" copy wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.linux-arm64.tar.gz ``` @@ -66,11 +66,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.windows-amd64.tar.gz ``` ARM version: - ```bash + ```sh filename="ARM" copy wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.windows-arm64.tar.gz ``` @@ -83,11 +83,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy wget https://github.com/prometheus/node_exporter/releases/download/v1.40.0/node_exporter-1.4.0.darwin-amd64.tar.gz ``` ARM version: - ```bash + ```sh filename="ARM" copy wget https://github.com/prometheus/node_exporter/releases/download/v1.40.0/node_exporter-1.4.0.darwin-arm64.tar.gz ``` @@ -95,11 +95,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy wget https://github.com/prometheus/node_exporter/releases/download/v1.40.0/node_exporter-1.4.0.linux-amd64.tar.gz ``` ARM version: - ```bash + ```sh filename="ARM" copy wget https://github.com/prometheus/node_exporter/releases/download/v1.40.0/node_exporter-1.4.0.linux-arm64.tar.gz ``` @@ -114,11 +114,11 @@ Let's first start by downloading the latest releases of the above mentioned modu AMD version: - ```bash + ```sh filename="AMD" copy wget https://github.com/ncabatoff/process-exporter/releases/download/v0.7.10/process-exporter-0.7.10.linux-amd64.tar.gz ``` ARM version: - ```bash + ```sh filename="ARM" copy wget https://github.com/ncabatoff/process-exporter/releases/download/v0.7.10/process-exporter-0.7.10.linux-arm64.tar.gz ``` @@ -131,7 +131,7 @@ Let's first start by downloading the latest releases of the above mentioned modu Run the following command: -``` +```sh filename="tar" copy tar xvf prometheus-*.tar.gz && tar xvf node_exporter-*.tar.gz && tar xvf process-exporter-*.tar.gz @@ -145,20 +145,20 @@ tar xvf process-exporter-*.tar.gz We are first going to copy the `prometheus` binary: -``` +```sh filename="cp" copy sudo cp ./prometheus-*.linux-amd64/prometheus /usr/local/bin/ ``` Next, we are going to copy over the 
`prometheus` console libraries: -``` +```sh filename="cp" copy sudo cp -r ./prometheus-*.linux-amd64/consoles /etc/prometheus && sudo cp -r ./prometheus-*.linux-amd64/console_libraries /etc/prometheus ``` We are going to do the same with `node-exporter` and `process-exporter`: -``` +```sh filename="cp" copy sudo cp ./node_exporter-*.linux-amd64/node_exporter /usr/local/bin/ && sudo cp ./process-exporter-*.linux-amd64/process-exporter /usr/local/bin/ ``` @@ -167,7 +167,7 @@ sudo cp ./process-exporter-*.linux-amd64/process-exporter /usr/local/bin/ Now we want to create dedicated users for each of the modules we have installed: -``` +```sh filename="useradd" copy sudo useradd --no-create-home --shell /usr/sbin/nologin prometheus && sudo useradd --no-create-home --shell /usr/sbin/nologin node_exporter && sudo useradd --no-create-home --shell /usr/sbin/nologin process-exporter @@ -175,7 +175,7 @@ sudo useradd --no-create-home --shell /usr/sbin/nologin process-exporter **7. Create Directories for `Prometheus`, and `Process exporter`:** -``` +```sh filename="mkdir" copy sudo mkdir /var/lib/prometheus && sudo mkdir /etc/process-exporter ``` @@ -186,7 +186,7 @@ We need to give our user permissions to access these directories: **prometheus**: -``` +```sh filename="chown" copy sudo chown prometheus:prometheus /etc/prometheus/ -R && sudo chown prometheus:prometheus /var/lib/prometheus/ -R && sudo chown prometheus:prometheus /usr/local/bin/prometheus @@ -194,20 +194,20 @@ sudo chown prometheus:prometheus /usr/local/bin/prometheus **node_exporter**: -``` +```sh filename="chown" copy sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter ``` **process-exporter**: -``` +```sh filename="chown" copy sudo chown process-exporter:process-exporter /etc/process-exporter -R && sudo chown process-exporter:process-exporter /usr/local/bin/process-exporter ``` **9. Finally, let's clean up these directories:** -``` +```sh filename="rm" copy rm -rf ./prometheus* && rm -rf ./node_exporter* && rm -rf ./process-exporter* @@ -223,13 +223,13 @@ If you are interested to see how we configure the Tangle Network nodes for monit Let’s edit the Prometheus config file and add all the modules in it: -``` +```sh filename="nano" copy sudo nano /etc/prometheus/prometheus.yml ``` Add the following code to the file and save: -``` +```yaml filename="prometheus.yml" copy global: scrape_interval: 15s evaluation_interval: 15s @@ -273,14 +273,14 @@ You can notice the first scrape job, where Prometheus monitors itself.
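If you want to lint this file before starting the service, the `promtool` utility that ships in the same Prometheus tarball can validate it. This is an optional step, not part of the original guide, and it assumes you copy `promtool` out of the extracted archive before the cleanup in step 9:

```sh filename="promtool" copy
# promtool is included in the extracted prometheus-*.linux-amd64/ directory
sudo cp ./prometheus-*.linux-amd64/promtool /usr/local/bin/

# Validate the Prometheus configuration file
promtool check config /etc/prometheus/prometheus.yml
```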
Process exporter needs a config file to be told which processes it should take into account: -``` +```sh filename="nano" copy sudo touch /etc/process-exporter/config.yml sudo nano /etc/process-exporter/config.yml ``` Add the following code to the file and save: -``` +```yaml filename="config.yml" copy process_names: - name: "{{.Comm}}" cmdline: @@ -293,14 +293,8 @@ process_names: Create and open the Prometheus service file: -``` -sudo touch /etc/systemd/system/prometheus.service && -sudo nano /etc/systemd/system/prometheus.service -``` - -Add the following lines: - -``` +```sh filename="prometheus.service" copy +sudo tee /etc/systemd/system/prometheus.service > /dev/null << EOF [Unit] Description=Prometheus Monitoring Wants=network-online.target @@ -319,20 +313,15 @@ Add the following lines: [Install] WantedBy=multi-user.target +EOF ``` ### Node exporter Create and open the Node exporter service file: -``` -sudo touch /etc/systemd/system/node_exporter.service && -sudo nano /etc/systemd/system/node_exporter.service -``` - -Add the following lines: - -``` +```sh filename="node_exporter.service" copy +sudo tee /etc/systemd/system/node_exporter.service > /dev/null << EOF [Unit] Description=Node Exporter Wants=network-online.target @@ -346,20 +335,15 @@ Add the following lines: [Install] WantedBy=multi-user.target +EOF ``` ### Process exporter Create and open the Process exporter service file: -``` -sudo touch /etc/systemd/system/process-exporter.service && -sudo nano /etc/systemd/system/process-exporter.service -``` - -Add the following lines: - -``` +```sh filename="process-exporter.service" copy +sudo tee /etc/systemd/system/process-exporter.service > /dev/null << EOF [Unit] Description=Process Exporter Wants=network-online.target @@ -374,13 +358,14 @@ Add the following lines: [Install] WantedBy=multi-user.target +EOF ``` ## Starting the Services Launch a daemon reload to take the services into account in systemd: -``` +```sh filename="daemon-reload" copy sudo systemctl daemon-reload ``` @@ -388,19 +373,19 @@ Next, we will want to start each service: **prometheus**: -``` +```sh filename="start service" copy sudo systemctl start prometheus.service ``` **node_exporter**: -``` +```sh filename="start service" copy sudo systemctl start node_exporter.service ``` **process-exporter**: -``` +```sh filename="start service" copy sudo systemctl start process-exporter.service ``` @@ -408,19 +393,19 @@ And check that they are working fine: **prometheus**: -``` +```sh filename="status" copy systemctl status prometheus.service ``` **node_exporter**: -``` +```sh filename="status" copy systemctl status node_exporter.service ``` **process-exporter**: -``` +```sh filename="status" copy systemctl status process-exporter.service ``` @@ -428,19 +413,19 @@ If everything is working adequately, activate the services! **prometheus**: -``` +```sh filename="enable" copy sudo systemctl enable prometheus.service ``` **node_exporter**: -``` +```sh filename="enable" copy sudo systemctl enable node_exporter.service ``` **process-exporter**: -``` +```sh filename="enable" copy sudo systemctl enable process-exporter.service ``` diff --git a/pages/docs/ecosystem-roles/validator/monitoring/quickstart.mdx b/pages/docs/ecosystem-roles/validator/monitoring/quickstart.mdx index d89d325e..a39eae5b 100644 --- a/pages/docs/ecosystem-roles/validator/monitoring/quickstart.mdx +++ b/pages/docs/ecosystem-roles/validator/monitoring/quickstart.mdx @@ -49,7 +49,7 @@ files assume a linux environment.
Refer [this](https://stackoverflow.com/questio **To start the monitoring stack, run:** -```bash +```sh filename="compose up" copy cd monitoring docker compose up -d ``` diff --git a/pages/docs/ecosystem-roles/validator/required-keys.mdx b/pages/docs/ecosystem-roles/validator/required-keys.mdx index c4674d74..8abb1604 100644 --- a/pages/docs/ecosystem-roles/validator/required-keys.mdx +++ b/pages/docs/ecosystem-roles/validator/required-keys.mdx @@ -29,7 +29,7 @@ subkey before running the command. **Once installed, to generate the DKG key you can run the following:** -``` +```sh filename="DKG Key" copy tangle-standalone key insert --base-path /tangle-data \ --chain "" \ --scheme Ecdsa \ @@ -39,7 +39,7 @@ tangle-standalone key insert --base-path /tangle-data \ **To generate the Aura key you can run the following:** -``` +```sh filename="Aura Key" copy tangle-standalone key insert --base-path /tangle-data \ --chain "" \ --scheme Sr25519 \ @@ -49,7 +49,7 @@ tangle-standalone key insert --base-path /tangle-data \ **To generate the Account key you can run the following:** -``` +```sh filename="Account Key" copy tangle-standalone key insert --base-path /tangle-data \ --chain "" \ --scheme Sr25519 \ @@ -59,7 +59,7 @@ tangle-standalone key insert --base-path /tangle-data \ **To generate the Imonline key you can run the following:** -``` +```sh filename="Imonline Key" copy tangle-standalone key insert --base-path /tangle-data \ --chain "" \ --scheme Sr25519 \ @@ -71,7 +71,7 @@ tangle-standalone key insert --base-path /tangle-data \ You can begin syncing your node by running the following command: -``` +```sh filename="Syncing node" copy ./target/release/tangle-parachain ``` @@ -104,7 +104,7 @@ Once everything is filled in properly, click Bond and sign the transaction with Operators need to set their `Author` session keys. Run the following command to author session keys. **Note:** You may need to change `http://localhost:9933` to your correct address. -``` +```sh filename="Generate session key" copy curl -H "Content-Type: application/json" -d '{"id":1, "jsonrpc":"2.0", "method": "author_rotateKeys", "params":[]}' http://localhost:9933 ``` diff --git a/pages/docs/ecosystem-roles/validator/requirements.mdx b/pages/docs/ecosystem-roles/validator/requirements.mdx index 568f7791..0650ca60 100644 --- a/pages/docs/ecosystem-roles/validator/requirements.mdx +++ b/pages/docs/ecosystem-roles/validator/requirements.mdx @@ -46,7 +46,7 @@ compile a Tangle node. First install and configure `rustup`: -```bash +```sh filename="Install Rust" copy # Install curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh @@ -56,7 +56,7 @@ source ~/.cargo/env Configure the Rust toolchain to default to the latest stable version, add nightly and the nightly wasm target: -```bash +```sh filename="Configure Rust" copy rustup default nightly rustup update rustup update nightly @@ -71,20 +71,20 @@ Great! Now your Rust environment is ready! 
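As an optional sanity check (not part of the original guide), you can confirm the toolchain and the wasm target are in place before moving on:

```sh filename="verify toolchain" copy
# Show the active toolchain and installed targets
rustc --version
rustup show
rustup target list --installed | grep wasm32
```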
🚀🚀 Debian version: - ```bash + ```sh filename=" Debian" copy sudo apt install --assume-yes git clang curl libssl-dev llvm libudev-dev make protobuf-compiler ``` Arch version: - ```bash + ```sh filename="Arch" copy pacman -Syu --needed --noconfirm curl git clang make protobuf ``` Fedora version: - ```bash + ```sh filename="Fedora" copy sudo dnf update sudo dnf install clang curl git openssl-devel make protobuf-compiler ``` Opensuse version: - ```bash + ```sh filename="Opensuse" copy sudo zypper install clang curl git openssl-devel llvm-devel libudev-devel make protobuf ``` @@ -98,7 +98,7 @@ Great! Now your Rust environment is ready! 🚀🚀 Assumes user has Homebrew already installed. - ``` + ```sh filename="Brew" copy brew update brew install openssl gmp protobuf cmake ``` @@ -107,7 +107,7 @@ Great! Now your Rust environment is ready! 🚀🚀 For Windows users please refer to the official Substrate documentation: - `https://docs.substrate.io/install/windows/` + [Windows](https://docs.substrate.io/install/windows/) @@ -116,7 +116,11 @@ Great! Now your Rust environment is ready! 🚀🚀 Once the development environment is set up, you can build the Tangle node from source. -```bash +```sh filename="Clone repo" copy +git clone https://github.com/webb-tools/tangle.git +``` + +```sh filename="Build" copy cargo build --release ``` @@ -129,27 +133,27 @@ You will now have the `tangle-standalone` binary built in `target/release/` dir Some features of tangle node are setup behind feature flags, to enable these features you will have to build the binary with these flags enabled -1. `txpool` +1. **txpool** This feature flag is useful to help trace and debug evm transactions on the chain, you should build node with this flag if you intend to use the node for any evm transaction following -```bash +```sh filename="Build txpool" copy cargo build --release --features txpool ``` -2. `relayer` +2. **relayer** This feature flag is used to start the embedded tx relayer with tangle node, you should build node with this flag if you intend to run a node with a relayer which can be used for transaction relaying or data querying -```bash +```sh filename="Build relayer" copy cargo build --release --features relayer ``` -3. `light-client` +3. 
**light-client** This feature flag is used to start the embedded light client with tangle node, you should build node with this flag if you intend to run a node with a light client relayer to sync EVM data on Tangle -```bash +```sh filename="Build light" copy cargo build --release --features light-client ``` @@ -162,24 +166,24 @@ In the below commands, substiture `LATEST_RELEASE` with the version you want to ### Get tangle binary -``` +```sh filename="Get binary" copy wget https://github.com/webb-tools/tangle/releases/download//tangle-standalone-linux-amd64 ``` ### Get tangle binary with txpool feature -``` +```sh filename="Get binary txpool" copy wget https://github.com/webb-tools/tangle/releases/download//tangle-standalone-txpool-linux-amd64 ``` ### Get tangle binary with relayer feature -``` +```sh filename="Get binary relayer" copy wget https://github.com/webb-tools/tangle/releases/download//tangle-standalone-relayer-linux-amd64 ``` ### Get tangle binary with light-client feature -``` +```sh filename="Get binary light" copy wget https://github.com/webb-tools/tangle/releases/download//tangle-standalone-light-client-linux-amd64 ``` diff --git a/pages/docs/ecosystem-roles/validator/systemd/full-node.mdx b/pages/docs/ecosystem-roles/validator/systemd/full-node.mdx index a304b80d..745890f3 100644 --- a/pages/docs/ecosystem-roles/validator/systemd/full-node.mdx +++ b/pages/docs/ecosystem-roles/validator/systemd/full-node.mdx @@ -15,15 +15,9 @@ compiled the Tangle binary. If you have not done so, please refer to the [Requir Run the following commands to create the service configuration file: -``` +```sh filename="mv" copy # Move the tangle-standalone binary to the bin directory (assumes you are in repo root directory) sudo mv ./target/release/tangle-standalone /usr/bin/ - -# navigate to /etc -cd /etc/systemd/system - -# create the service configuration file -sudo touch full.service ``` Add the following contents to the service configuration file. Make sure to replace the **USERNAME** with the username you created in the previous step, add your own node name, and update @@ -33,7 +27,8 @@ any paths or ports to your own preference. **Full Node** -``` +```sh filename="full.service" copy +sudo tee /etc/systemd/system/full.service > /dev/null << EOF [Unit] Description=Tangle Full Node After=network-online.target @@ -55,13 +50,15 @@ ExecStart=/usr/bin/tangle-standalone \ [Install] WantedBy=multi-user.target +EOF ``` **Full Node with evm trace** **Note:** To run with evm trace, you should use a binary built with `txpool` flag, refer [requirements](../requirements.mdx) page for more details. -``` +```sh filename="full.service" copy +sudo tee /etc/systemd/system/full.service > /dev/null << EOF [Unit] Description=Tangle Full Node After=network-online.target @@ -82,6 +79,7 @@ ExecStart=/usr/bin/tangle-standalone \ [Install] WantedBy=multi-user.target +EOF ``` ### Enable the services @@ -89,23 +87,23 @@ WantedBy=multi-user.target Double check that the config has been written to `/etc/systemd/system/full.service` correctly. If so, enable the service so it runs on startup, and then try to start it now: -``` +```sh filename="enable service" copy +sudo systemctl daemon-reload sudo systemctl enable full - sudo systemctl start full ``` Check the status of the service: -``` -systemctl status full +```sh filename="status" copy +sudo systemctl status full ``` You should see the node connecting to the network and syncing the latest blocks. 
If you need to tail the latest output, you can use: -``` -journalctl -u full.service -f +```sh filename="logs" copy +sudo journalctl -u full.service -f ``` Congratulations! You have officially set up a Tangle Network node using Systemd. If you are interested diff --git a/pages/docs/ecosystem-roles/validator/systemd/validator-node.mdx b/pages/docs/ecosystem-roles/validator/systemd/validator-node.mdx index f8412af1..9627c4cb 100644 --- a/pages/docs/ecosystem-roles/validator/systemd/validator-node.mdx +++ b/pages/docs/ecosystem-roles/validator/systemd/validator-node.mdx @@ -29,62 +29,62 @@ should paste your SURI when the command asks for it. **Account Keys** -``` +```sh filename="Acco" copy # it will ask for your suri, enter it. ./target/release/tangle-standalone key insert --base-path /data/validator/ \ --chain ./chainspecs/tangle-standalone.json \ --scheme Sr25519 \ ---suri \ +--suri <"12-MNEMONIC-PHRASE"> \ --key-type acco ``` **Aura Keys** -``` +```sh filename="Aura" copy # it will ask for your suri, enter it. ./target/release/tangle-standalone key insert --base-path /data/validator/ \ --chain ./chainspecs/tangle-standalone.json \ --scheme Sr25519 \ ---suri \ +--suri <"12-MNEMONIC-PHRASE"> \ --key-type aura ``` **Im-online Keys** - **these keys are optional** -``` +```sh filename="Imonline" copy # it will ask for your suri, enter it. ./target/release/tangle-standalone key insert --base-path /data/validator/ \ --chain ./chainspecs/tangle-standalone.json \ --scheme Sr25519 \ ---suri \ +--suri <"12-MNEMONIC-PHRASE"> \ --key-type imon ``` **DKG Keys** -``` +```sh filename="DKG" copy # it will ask for your suri, enter it. ./target/release/tangle-standalone key insert --base-path /data/validator/ \ --chain ./chainspecs/tangle-standalone.json \ --scheme Ecdsa \ ---suri \ +--suri <"12-MNEMONIC-PHRASE"> \ --key-type wdkg ``` **Grandpa Keys** -``` +```sh filename="Grandpa" copy # it will ask for your suri, enter it. ./target/release/tangle-standalone key insert --base-path /data/validator/ \ --chain ./chainspecs/tangle-standalone.json \ --scheme Ed25519 \ ---suri \ +--suri <"12-MNEMONIC-PHRASE"> \ --key-type gran ``` To ensure you have generated the keys correctly, run: -``` +```sh filename="ls" copy ls ~/data/validator//keystore # You should see some file(s) there; these are the keys. ``` @@ -93,15 +93,9 @@ ls ~/data/validator//keystore Run the following commands to create the service configuration file: -``` +```sh filename="mv" copy # Move the tangle-standalone binary to the bin directory (assumes you are in repo root directory) sudo mv ./target/release/tangle-standalone /usr/bin/ - -# navigate to /etc -cd /etc/systemd/system - -# create the service configuration file -sudo touch validator.service ``` Add the following contents to the service configuration file. Make sure to replace the **USERNAME** with the username you created in the previous step, add your own node name, and update any paths or ports to your own preference.
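Since the unit file below points `ExecStart` at `/usr/bin/tangle-standalone`, it is worth a quick check that the binary is on the path and runs. This is an optional step, not part of the original guide:

```sh filename="verify binary" copy
# Confirm the binary was moved and is executable
which tangle-standalone
tangle-standalone --version
```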
@@ -113,7 +107,8 @@ if you want the node to auto generate the keys, add the `--auto-insert-keys` fla **Validator Node** -``` +```sh filename="validator.service" copy +sudo tee /etc/systemd/system/validator.service > /dev/null << EOF [Unit] Description=Tangle Validator Node After=network-online.target @@ -128,13 +123,14 @@ ExecStart=/usr/bin/tangle-standalone \ --name \ --chain tangle-testnet \ --node-key-file "/home//node-key" \ - --port 9944 \ + --port 30333 \ --validator \ --no-mdns \ --telemetry-url "wss://telemetry.polkadot.io/submit/ 0" --name [Install] WantedBy=multi-user.target +EOF ``` ### Enable the services @@ -142,28 +138,28 @@ WantedBy=multi-user.target Double check that the config has been written to `/etc/systemd/system/validator.service` correctly. If so, enable the service so it runs on startup, and then try to start it now: -``` +```sh filename="enable service" copy +sudo systemctl daemon-reload sudo systemctl enable validator - sudo systemctl start validator ``` Check the status of the service: -``` -systemctl status validator +```sh filename="status" copy +sudo systemctl status validator ``` You should see the node connecting to the network and syncing the latest blocks. If you need to tail the latest output, you can use: -``` -journalctl -u validator.service -f +```sh filename="logs" copy +sudo journalctl -u validator.service -f ``` If the node is running correctly, you should see an output similar to below: -``` +```sh filename="output" 2023-03-22 14:55:51 Tangle Standalone Node 2023-03-22 14:55:51 ✌️ version 0.1.15-54624e3-aarch64-macos 2023-03-22 14:55:51 ❤️ by Webb Technologies Inc., 2017-2023 @@ -200,7 +196,7 @@ After a validator node is started, it will start syncing with the current chain Example of node sync : -``` +```sh filename="output after synced" copy 2021-06-17 03:07:39 🔍 Discovered new external address for our node: /ip4/10.26.16.1/tcp/30333/ws/p2p/12D3KooWLtXFWf1oGrnxMGmPKPW54xWCHAXHbFh4Eap6KXmxoi9u 2021-06-17 03:07:40 ⚙️ Syncing 218.8 bps, target=#5553764 (17 peers), best: #24034 (0x08af…dcf5), finalized #23552 (0xd4f0…2642), ⬇ 173.5kiB/s ⬆ 12.7kiB/s 2021-06-17 03:07:45 ⚙️ Syncing 214.8 bps, target=#5553765 (20 peers), best: #25108 (0xb272…e800), finalized #25088 (0x94e6…8a9f), ⬇ 134.3kiB/s ⬆ 7.4kiB/s diff --git a/pages/docs/ecosystem-roles/validator/troubleshooting.mdx b/pages/docs/ecosystem-roles/validator/troubleshooting.mdx index 40e1d306..5ebbeeac 100644 --- a/pages/docs/ecosystem-roles/validator/troubleshooting.mdx +++ b/pages/docs/ecosystem-roles/validator/troubleshooting.mdx @@ -40,13 +40,13 @@ This typically means that you are running an older version and will need to upgr Install Homebrew if you have not already. You can check if you have it installed with the following command: -```bash +```sh filename="brew" copy brew help ``` If you do not have it installed open the Terminal application and execute the following commands: -```bash +```sh filename="install brew" copy # Install Homebrew if necessary https://brew.sh/ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)" @@ -57,13 +57,13 @@ brew install openssl ❗ **Note:** Native ARM Homebrew installations are only going to be supported at `/opt/homebrew`. After Homebrew installs, make sure to add `/opt/homebrew/bin` to your PATH. 
-``` +```sh filename="add PATH" copy echo 'export PATH=/opt/homebrew/bin:$PATH' >> ~/.bash_profile ``` An example `bash_profile` for reference may look like the following: -``` +```sh filename="export PATH" copy export PATH=/opt/homebrew/bin:$PATH export PATH=/opt/homebrew/opt/llvm/bin:$PATH export CC=/opt/homebrew/opt/llvm/bin/clang @@ -75,13 +75,13 @@ export RUSTFLAGS='-L /opt/homebrew/lib' In order to build **dkg-substrate** in `--release` mode using `aarch64-apple-darwin` Rust toolchain you need to set the following environment variables: -```bash +```sh filename="export" copy echo 'export RUSTFLAGS="-L /opt/homebrew/lib"' >> ~/.bash_profile ``` Ensure `gmp` dependency is installed correctly. -``` +```sh filename="install gmp" copy brew install gmp ``` @@ -89,13 +89,13 @@ If you are still receiving an issue with `gmp`, you may need to adjust your path Run: -``` +```sh filename="clean" copy cargo clean ``` Then: -``` +```sh filename="export" copy export LIBRARY_PATH=$LIBRARY_PATH:$(brew --prefix)/lib:$(brew --prefix)/opt/gmp/lib ``` @@ -103,6 +103,6 @@ This should be added to your bash_profile as well. Ensure `protobuf` dependency is installed correctly. -``` +```sh filename="install protobuf" copy brew install protobuf ```
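If the build still fails, it can help to confirm where Homebrew placed these libraries and that `protoc` resolves. Treat this as a sketch; the exact prefixes differ between Intel and Apple Silicon machines:

```sh filename="verify deps" copy
# Print the install prefixes Homebrew uses for the libraries referenced above
brew --prefix gmp
brew --prefix protobuf

# Confirm the protobuf compiler and library path are visible to the build
protoc --version
echo $LIBRARY_PATH
```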