diff --git a/components/QuickStart.tsx b/components/QuickStart.tsx index 6b01acba..ee8ffa68 100644 --- a/components/QuickStart.tsx +++ b/components/QuickStart.tsx @@ -18,7 +18,7 @@ export const QuickStartArea = () => { description: `Start your journey on Tangle Network. This guide will walk you through the steps to become a validator, ensuring network security and integrity.`, name: "Validate on Tangle Network", }} - href="/docs/ecosystem-roles/validator/quickstart/" + href="/docs/tangle-network/validator/quickstart/" > Before following this guide you should have already set up your machine's environment, installed the dependencies, and compiled the Tangle binary. If you have not done so, please refer to the [Requirements](/docs/ecosystem-roles/validator/requirements/) page. - ## Standalone Testnet ### 1. Fetch the tangle binary @@ -82,10 +81,10 @@ If the node is running correctly, you should see an output similar to the one below: 2023-03-22 14:55:53 discovered: 12D3KooWNHhcCUsZTdTkADmDJbSK9YjbtscHHA8R4jvrbGwjPVez /ip4/192.168.88.12/tcp/30305 ``` -**Note** : Since the `--auto-insert-keys` flag was used the logs will print out the keys automatically generated for you, +**Note** : Since the `--auto-insert-keys` flag was used, the logs will print out the keys automatically generated for you. Make sure to note these keys down and store them safely; you will need them if you ever migrate or restart your node. Congratulations! You have officially set up a Tangle Network node.
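Since the auto-generated keys exist only in your node's keystore, it is worth backing that directory up right away. As a rough sketch (the paths below are placeholders for illustration, not a guaranteed layout — use the `--base-path` your node actually writes to):

```sh filename="backup keys" copy
# Illustrative only: archive the keystore before a migration or restart.
# BASE is a stand-in; a real node creates chains/<chain-id>/keystore itself
# under the base path it was started with.
BASE=/tmp/tangle-demo-base
mkdir -p "$BASE/chains/local_testnet/keystore"   # only so the demo path exists
tar -czf /tmp/tangle-keystore-backup.tar.gz -C "$BASE" chains
tar -tzf /tmp/tangle-keystore-backup.tar.gz      # list the archive to verify
```

Store the resulting archive somewhere offline; anyone holding these key files can act as your node.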
The quickstart is only meant for anyone looking to run a tangle node with minimal -config, this guide uses automated keys and it is not recommended to run a validator using this setup long term, refer to [advanced](/docs/ecosystem-roles/validator/systemd/validator-node/) guide +config. This guide uses automated keys and is not recommended for running a validator long term; refer to the [advanced](/docs/ecosystem-roles/validator/systemd/validator-node/) guide for a more secure long-term setup. If you are interested in learning how to set up monitoring for your node, please refer to the [monitoring](../monitoring/quickstart.mdx) page. diff --git a/pages/docs/ecosystem-roles/validator/systemd/validator-node.mdx b/pages/docs/ecosystem-roles/validator/systemd/validator-node.mdx index 9627c4cb..4cbd66f3 100644 --- a/pages/docs/ecosystem-roles/validator/systemd/validator-node.mdx +++ b/pages/docs/ecosystem-roles/validator/systemd/validator-node.mdx @@ -11,7 +11,7 @@ or crashes (and helps to avoid getting slashed!). Before following this guide you should have already set up your machine's environment, installed the dependencies, and compiled the Tangle binary. If you have not done so, please refer to the [Requirements](/docs/ecosystem-roles/validator/requirements/) page.
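For reference, a minimal systemd unit for such a setup might look like the sketch below. The unit name, `User`, binary path, and flags are illustrative assumptions, not an official unit shipped with this guide:

```ini filename="tangle-validator.service" copy
[Unit]
Description=Tangle validator node (illustrative sketch)
After=network-online.target

[Service]
# Placeholder user and binary path; substitute your own.
User=tangle
ExecStart=/usr/local/bin/tangle-standalone --validator --base-path /var/lib/tangle
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

After placing a file like this under `/etc/systemd/system/`, `sudo systemctl daemon-reload && sudo systemctl enable --now tangle-validator` would start the node and restart it automatically on failure.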
-## Standalone Testnet +## Standalone Testnet ### Generate and store keys diff --git a/pages/docs/tangle-network/_meta.json b/pages/docs/tangle-network/_meta.json index 0e613852..048cf0c0 100644 --- a/pages/docs/tangle-network/_meta.json +++ b/pages/docs/tangle-network/_meta.json @@ -7,6 +7,7 @@ "json-rpc-endpoints": "JSON-RPC Endpoints", "understanding-dkg-tangle": "Distributed Key Generation", "governance": "Onchain Governance", + "validator": "Validate", "incentives": "Incentives", "community": "Team and Community" } diff --git a/pages/docs/tangle-network/validator/_meta.json b/pages/docs/tangle-network/validator/_meta.json new file mode 100644 index 00000000..d0b606d8 --- /dev/null +++ b/pages/docs/tangle-network/validator/_meta.json @@ -0,0 +1,12 @@ +{ + "quickstart": "Quickstart", + "validation": "Validator Overview", + "validator-rewards": "Validator Rewards", + "required-keys": "Required Keys", + "requirements": "Hardware Requirements", + "deploy-with-docker": "Deploying with Docker", + "systemd": "Running with Systemd", + "monitoring": "Node Monitoring", + "api-reference": "API Reference", + "troubleshooting": "Troubleshooting" +} diff --git a/pages/docs/tangle-network/validator/api-reference/_meta.json b/pages/docs/tangle-network/validator/api-reference/_meta.json new file mode 100644 index 00000000..47ee08e8 --- /dev/null +++ b/pages/docs/tangle-network/validator/api-reference/_meta.json @@ -0,0 +1,3 @@ +{ + "cli": "CLI Options" +} diff --git a/pages/docs/tangle-network/validator/api-reference/cli.mdx b/pages/docs/tangle-network/validator/api-reference/cli.mdx new file mode 100644 index 00000000..27bb4151 --- /dev/null +++ b/pages/docs/tangle-network/validator/api-reference/cli.mdx @@ -0,0 +1,409 @@ +--- +title: CLI Reference +description: Explore Webb's command line interface. 
+--- + +import { Tabs, Tab } from "../../../../../components/Tabs"; + +# Command-Line Reference + +When starting up your own Tangle node, there are some required and optional flags that can be used. + +This page outlines the most commonly used flags; for exhaustive documentation of all available flags and options, please refer to +the official Substrate documentation [here](https://substrate.io/), as well as the list of out-of-the-box command-line tools that +ship with all Substrate-based nodes, including the Tangle node, [here](https://docs.substrate.io/reference/command-line-tools/). + +After installing the [`tangle`](/repo/docs/getting-started/add-to-project) binary (or pulling it from Docker), you can view the Tangle command-line interface (CLI). For a complete list of the available flags, you can spin up your Tangle node with `--help` +added to the end of the command. The command will vary depending on how you choose to spin up your node, and on whether you're using Docker or Systemd. + + + + + ```sh filename="help" copy + docker run --platform linux/amd64 --network="host" -v "/var/lib/data" --entrypoint ./tangle-standalone \ + ghcr.io/webb-tools/tangle/tangle-standalone:main \ + --help + ``` + + + + + ```sh filename="help" copy + # If you used the release binary + ./tangle-standalone --help + + # Or if you compiled the binary + ./target/release/tangle-standalone --help + ``` + + + + +If you have compiled the tangle-parachain binary, it's important to note that the command-line arguments +provided first will be passed to the parachain node, while the arguments +provided after `--` will be passed to the relay chain node. + +```sh filename="args" copy +tangle-parachain -- +``` + +USAGE: + +```sh filename="usage" copy +tangle-parachain [OPTIONS] [-- ...] +tangle-parachain +``` + +## Common Flags + +The list below covers the most commonly used flags for your convenience. + +#### `--alice` + +Shortcut for `--name Alice --validator` with session keys for `Alice` added to keystore.
Commonly +used for development or local test networks. + +```sh filename="alice" copy +tangle-standalone --alice +``` + +#### `--blocks-pruning ` + +Specify the blocks pruning mode, a number of blocks to keep or 'archive'. + +Default is to keep all finalized blocks. Otherwise, all blocks can be kept (i.e. +'archive'), all canonical blocks can be kept (i.e. 'archive-canonical'), or only the last N +blocks can be kept (i.e. a number). + +NOTE: only finalized blocks are subject to removal! + +```sh filename="blocks-pruning" copy +tangle-standalone --blocks-pruning 120 +``` + +#### `--bob` + +Shortcut for `--name Bob --validator` with session keys for `Bob` added to keystore. Commonly +used for development or local test networks. + +```sh filename="bob" copy +tangle-standalone --bob +``` + +#### `--bootnodes` + +Specify a list of bootnodes. + +```sh filename="bootnodes" copy +tangle-standalone --bootnodes /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWAWueKNxuNwMbAtss3nDTQhMg4gG3XQBnWdQdu2DuEsZS +``` + +#### `--chain ` + +Specify the chain specification. + +It can be one of the predefined ones (dev, local, or staging) or it can be a path to a +file with the chainspec (such as one exported by the `build-spec` subcommand). + +```sh filename="local" copy +tangle-standalone --chain standalone-local +``` + +#### `--charlie` + +Shortcut for `--name Charlie --validator` with session keys for `Charlie` added to keystore. Commonly +used for development or local test networks. + +```sh filename="charlie" copy +tangle-standalone --charlie +``` + +#### `--collator` + +Run node as collator. (Not applicable at this time.) + +Note that this is the same as running with `--validator`. + +```sh filename="collator" copy +tangle-standalone --collator +``` + +#### `-d, --base-path ` + +Specify a custom base path.
+ +```sh filename="base path" copy +tangle-standalone --base-path /data +``` + +#### `--db-cache ` + +Limit the memory the database cache can use. + +```sh filename="db-cache" copy +tangle-standalone --db-cache 128 +``` + +#### `--detailed-log-output` + +Enable detailed log output. + +This includes displaying the log target, log level and thread name. + +This is automatically enabled when something is logged with any higher level than +`info`. + +```sh filename="log-output" copy +tangle-standalone --detailed-log-output +``` + +#### `--dev` + +Specify the development chain. + +This flag sets the `--chain=dev`, `--force-authoring`, `--rpc-cors=all`, `--alice`, and +`--tmp` flags, unless explicitly overridden. + +```sh filename="dev" copy +tangle-standalone --dev +``` + +#### `--execution ` + +The execution strategy that should be used by all execution contexts. + +[possible values: native, wasm, both, native-else-wasm] + +`native` - only execute with the native build +`wasm` - only execute with the Wasm build +`both` - execute with both native and Wasm builds +`native-else-wasm` - execute with the native build if possible and if it fails, then execute with Wasm + +```sh filename="wasm" copy +tangle-standalone --execution wasm +``` + +#### `--force-authoring` + +Enable authoring even when offline. + +```sh filename="authoring" copy +tangle-standalone --force-authoring +``` + +#### `--keystore-path ` + +Specify a custom keystore path. + +```sh filename="keystore path" copy +tangle-standalone --keystore-path /tmp/chain/data/ +``` + +#### `--keystore-uri ` + +Specify custom URIs to connect to for keystore-services. + +```sh filename="keystore url" copy +tangle-standalone --keystore-uri foo://example.com:8042/over/ +``` + +#### `--name ` + +The human-readable name for this node. + +The node name will be reported to the telemetry server, if enabled. + +```sh filename="name" copy +tangle-standalone --name zeus +``` + +#### `--node-key ` + +The secret key to use for libp2p networking.
+ +The value is a string that is parsed according to the choice of `--node-key-type` as +follows: + +`ed25519`: The value is parsed as a hex-encoded Ed25519 32 byte secret key, i.e. 64 hex +characters. + +The value of this option takes precedence over `--node-key-file`. + +WARNING: Secrets provided as command-line arguments are easily exposed. Use of this +option should be limited to development and testing. To use an externally managed secret +key, use `--node-key-file` instead. + +```sh filename="node-key" copy +tangle-standalone --node-key b6806626f5e4490c27a4ccffed4fed513539b6a455b14b32f58878cf7c5c4e68 +``` + +#### `--node-key-file ` + +The file from which to read the node's secret key to use for libp2p networking. + +The contents of the file are parsed according to the choice of `--node-key-type` as +follows: + +`ed25519`: The file must contain an unencoded 32 byte or hex encoded Ed25519 secret key. + +If the file does not exist, it is created with a newly generated secret key of the +chosen type. + +```sh filename="node-key-file" copy +tangle-standalone --node-key-file ./node-keys-file/ +``` + +#### `--port ` + +Specify the p2p protocol TCP port. + +```sh filename="port" copy +tangle-standalone --port 30333 +``` + +#### `--prometheus-external` + +Expose the Prometheus exporter on all interfaces. + +Default is local. + +```sh filename="prometheus" copy +tangle-standalone --prometheus-external +``` + +#### `--prometheus-port ` + +Specify the Prometheus exporter TCP port. + +```sh filename="prometheus-port" copy +tangle-standalone --prometheus-port 9090 +``` + +#### `--rpc-cors ` + +Specify browser Origins allowed to access the HTTP & WS RPC servers. + +A comma-separated list of origins (protocol://domain or special `null` value). A value of +`all` will disable origin validation. Default is to allow localhost and +https://polkadot.js.org origins. When running in `--dev` mode the default is to allow all origins.
+ +```sh filename="rpc-cors" copy +tangle-standalone --rpc-cors "*" +``` + +#### `--rpc-external` + +Listen to all RPC interfaces. + +Default is local. Note: not all RPC methods are safe to be exposed publicly. Use an RPC +proxy server to filter out dangerous methods. More details: +https://docs.substrate.io/main-docs/build/custom-rpc/#public-rpcs. Use +`--unsafe-rpc-external` to suppress the warning if you understand the risks. + +```sh filename="rpc-external" copy +tangle-standalone --rpc-external +``` + +#### `--rpc-port ` + +Specify the HTTP RPC server TCP port. + +```sh filename="rpc-port" copy +tangle-standalone --rpc-port 9933 +``` + +#### `--state-pruning ` + +Specify the state pruning mode, a number of blocks to keep or 'archive'. + +Default is to keep only the last 256 blocks; otherwise, the state can be kept for all +blocks (i.e. 'archive') or for all canonical blocks (i.e. +'archive-canonical'). + +```sh filename="state-pruning" copy +tangle-standalone --state-pruning 128 +``` + +#### `--telemetry-url ` + +The URL of the telemetry server to connect to. + +This flag can be passed multiple times as a means to specify multiple telemetry +endpoints. Verbosity levels range from 0-9, with 0 denoting the least verbosity. +Expected format is 'URL VERBOSITY'. + +```sh filename="wss" copy +tangle-standalone --telemetry-url 'wss://foo/bar 0' +``` + +#### `--validator` + +Enable validator mode. + +The node will be started with the authority role and actively participate in any +consensus task that it can (e.g. depending on availability of local keys).
+ +```sh filename="validator" copy +tangle-standalone --validator +``` + +#### `--wasm-execution ` + +Method for executing Wasm runtime code + +[default: compiled] +[possible values: interpreted-i-know-what-i-do, compiled] + +`compiled` - this is the default and uses the Wasmtime compiled runtime +`interpreted-i-know-what-i-do` - uses the wasmi interpreter + +```sh filename="wasm-execution" copy +tangle-standalone --wasm-execution compiled +``` + +#### `--ws-external` + +Listen to all Websocket interfaces. + +Default is local. Note: not all RPC methods are safe to be exposed publicly. Use an RPC +proxy server to filter out dangerous methods. More details: +https://docs.substrate.io/main-docs/build/custom-rpc/#public-rpcs. Use +`--unsafe-ws-external` to suppress the warning if you understand the risks. + +```sh filename="ws-external" copy +tangle-standalone --ws-external +``` + +#### `--ws-port ` + +Specify WebSockets RPC server TCP port + +```sh filename="ws-port" copy +tangle-standalone --ws-port 9944 +``` + +## Subcommands + +The following subcommands are available: + +USAGE: + +```sh filename="subcommand" copy +tangle-standalone +``` + +| Subcommand | Description | +| -------------------- | --------------------------------------------------------------------------------------------------- | +| benchmark | Sub-commands concerned with benchmarking. 
The pallet benchmarking moved to the `pallet` sub-command | | build-spec | Build a chain specification | | check-block | Validate blocks | | export-blocks | Export blocks | | export-genesis-state | Export the genesis state of the standalone node | | export-genesis-wasm | Export the genesis wasm of the standalone node | | export-state | Export the state of a given block into a chain spec | | help | Print this message or the help of the given subcommand(s) | | import-blocks | Import blocks | | key | Key management CLI utilities | | purge-chain | Remove the whole chain | | revert | Revert the chain to a previous state | | try-runtime | Try some testing command against a specified runtime state | diff --git a/pages/docs/tangle-network/validator/deploy-with-docker/_meta.json b/pages/docs/tangle-network/validator/deploy-with-docker/_meta.json new file mode 100644 index 00000000..1f5e039a --- /dev/null +++ b/pages/docs/tangle-network/validator/deploy-with-docker/_meta.json @@ -0,0 +1,5 @@ +{ + "full-node": "Full Node", + "validator-node": "Validator Node", + "relayer-node": "Relayer Node" +} diff --git a/pages/docs/tangle-network/validator/deploy-with-docker/full-node.mdx b/pages/docs/tangle-network/validator/deploy-with-docker/full-node.mdx new file mode 100644 index 00000000..e3886ca6 --- /dev/null +++ b/pages/docs/tangle-network/validator/deploy-with-docker/full-node.mdx @@ -0,0 +1,119 @@ +--- +title: Deploying with Docker +description: Deploy a Tangle node with only a few steps. +--- + +import Callout from "../../../../../components/Callout"; + +# Deploying with Docker + +A Tangle node can be spun up quickly using Docker. For more information on installing Docker, +please visit the official Docker [docs](https://docs.docker.com/get-docker/). When connecting to Tangle on Kusama, it will take a few days to completely +sync the embedded relay chain.
Make sure that your system meets the requirements, which you can read [here](https://docs.webb.tools/docs/ecosystem-roles/validator/requirements/). + +## Using Docker + +The quickest and easiest way to get started is to make use of our published Docker Tangle image. In doing so, users simply pull down the image from ghcr.io, +set their keys, fetch the applicable chainspec and run the start command to get up and running. + +### **1. Pull the Tangle Docker image:** + +```sh filename="pull" copy +# Only use "main" if you know what you are doing, it will use the latest and maybe unstable version of the node. + +docker pull ghcr.io/webb-tools/tangle/tangle-standalone:main +``` + +### **2. Create a local directory to store the chain data:** + +Let us create a directory where we will store all the data for our node. This includes the chain data and logs. + +```sh filename="mkdir" copy +mkdir /var/lib/tangle/ +``` + +### **3. Fetch applicable chainspec(s):** + +To join the Tangle Test network, we need to fetch the appropriate chainspec for the Tangle network. +Download the latest chainspec for the standalone testnet: + +```sh filename="get chainspec" copy +# Fetches chainspec for Tangle network +wget https://raw.githubusercontent.com/webb-tools/tangle/main/chainspecs/tangle-standalone.json +``` + +Please make a note of where you have stored this `json` file, as we will need it in the next steps. + +**Note:** Full nodes do not participate in block production or consensus, so no required keys are necessary. + +**4.
Start Tangle full node:** + +To start the node, run the following command: + +```sh filename="docker run" copy +docker run --rm -it -v /var/lib/tangle/:/data ghcr.io/webb-tools/tangle/tangle-standalone:main \ + --chain tangle-testnet \ + --name="YOUR-NODE-NAME" \ + --base-path /data \ + --rpc-cors all \ + --port 9946 \ + --telemetry-url "wss://telemetry.polkadot.io/submit/ 0" +``` + + + For an overview of the above flags, please refer to the [CLI Usage](/docs/ecosystem-roles/validator/api-reference/cli/) page of our documentation. + + +Once Docker pulls the necessary images, your Tangle node will start, displaying lots of information, +such as the chain specification, node name, role, genesis state, and more. + +If you followed the installation instructions for Tangle, once synced, you will be connected to peers and see +blocks being produced on the Tangle network! Note that in this case you need to also sync to the Polkadot/Kusama +relay chain, which might take a few days. + +### Update the Client + +As Tangle development continues, it will sometimes be necessary to upgrade your node software. Node operators will be notified +on our Discord channel when upgrades are available and whether they are necessary (some client upgrades are optional). +The upgrade process is straightforward and is the same for a full node. + +1. Stop the Docker container: + +```sh filename="docker stop" copy +sudo docker stop `CONTAINER_ID` +``` + +2. Get the latest version of Tangle from the Tangle GitHub Release [page](https://github.com/webb-tools/tangle/pkgs/container/tangle%2Ftangle-standalone) + +3. Pull the latest version of the Tangle binary by doing `docker pull ghcr.io/webb-tools/tangle/tangle-standalone:{VERSION_CODE}`. + For example, if the latest version of Tangle is v0.1.12, then the command would be `docker pull ghcr.io/webb-tools/tangle/tangle-standalone:v0.1.12` + +4. Restart the Tangle container and you should have the updated version of the client.
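The upgrade steps above can be tied together in a small shell sketch; the helper function and release tag here are illustrative, not part of the official tooling:

```sh filename="upgrade sketch" copy
# Build the image reference for a given release tag (tag below is illustrative).
image_ref() { printf 'ghcr.io/webb-tools/tangle/tangle-standalone:%s' "$1"; }

IMAGE="$(image_ref v0.1.12)"
echo "$IMAGE"
# Then, with a node container running:
#   sudo docker stop <CONTAINER_ID>
#   docker pull "$IMAGE"
#   ...and restart with your original `docker run` flags, using the new tag.
```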
### Purge Your Node + +If you need a fresh instance of your Tangle node, you can purge your node by removing the associated data directory. + +You'll first need to stop the Docker container: + +```sh filename="docker stop" copy +sudo docker stop `CONTAINER_ID` +``` + +If you did not use the `-v` flag to specify a local directory for storing your chain data when you spun up your node, then the data folder is related to the Docker container itself. Therefore, removing the Docker container will remove the chain data. + +If you did spin up your node with the `-v` flag, you will need to purge the specified directory. For example, for the suggested data directory, you can run the following command to purge your standalone node data: + +```sh filename="rm" copy +# purges standalone data +sudo rm -rf /data/chains/* +``` + +If you ran a parachain node, you can run the following command to purge your relay-chain node data: + +```sh filename="rm" copy +# purges relay chain data +sudo rm -rf /data/polkadot/* +``` + +Now that your chain data has been purged, you can start a new node with a fresh data directory! diff --git a/pages/docs/tangle-network/validator/deploy-with-docker/relayer-node.mdx b/pages/docs/tangle-network/validator/deploy-with-docker/relayer-node.mdx new file mode 100644 index 00000000..3aade0b0 --- /dev/null +++ b/pages/docs/tangle-network/validator/deploy-with-docker/relayer-node.mdx @@ -0,0 +1,268 @@ +--- +title: Deploying with Docker +description: An overview of the Webb Tangle node and Webb Relayer deployment process. +--- + +import Callout from "../../../../../components/Callout"; + +# Deploying Tangle Validator and Relayer + +It is likely that network participants running a Tangle validator node may also want to run a relayer node. This guide +will walk you through the process of deploying a Tangle validator and a Webb Relayer.
By the end of this document, you will have set up a Webb Relayer +at a publicly accessible endpoint alongside a Tangle validator node, both of which will be running within Docker containers. + +## Prerequisites + +Docker must be installed on the Linux machine; for instructions on how to install Docker on the machine, +please visit the official Docker installation documentation [here](https://docs.docker.com/desktop/install/linux-install/). + +When connecting to Tangle on Kusama, it will take a few days to completely +sync the embedded relay chain. Make sure that your system meets the requirements, which you can read [here](/docs/ecosystem-roles/validator/requirements/). + +## Using Docker Compose + +The quickest and easiest way to get started is to make use of our published Docker Tangle image. In doing so, users simply +create a local directory to store the chain data, download the latest chainspec for the standalone testnet, set their keys, and run the start +command to get up and running. + +### **1. Pull the Tangle Docker image:** + +We will use the pre-built Tangle Docker image to generate and insert the required keys for our node. + +```sh filename="pull" copy +# Only use "main" if you know what you are doing, it will use the latest and maybe unstable version of the node. + +docker pull ghcr.io/webb-tools/tangle/tangle-standalone:main +``` + +### **2. Create a local directory to store the chain data:** + +Let us create a directory where we will store all the data for our node. This includes the chain data, keys, and logs. + +```sh filename="mkdir" copy +mkdir /var/lib/tangle/ +``` + +### **3. Generate and store keys:** + +We need to generate the required keys for our node. For more information on these keys, please see the [Required Keys]() section.
The keys we need to generate include the following: + +- DKG key (Ecdsa) +- Aura key (Sr25519) +- Account key (Sr25519) +- Grandpa key (Ed25519) + +Let's now insert our required secret keys. We will not pass the SURI in the command; instead, the command is interactive and you +should paste your SURI when it asks for it. + +**Account Keys** + +```sh filename="Acco" copy +# it will ask for your SURI; enter it. +docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ +ghcr.io/webb-tools/tangle/tangle-standalone:main \ + key insert --base-path /var/lib/tangle/ \ + --chain /data/chainspecs/tangle-standalone.json \ + --scheme Sr25519 \ + --key-type acco +``` + +**Aura Keys** + +```sh filename="Aura" copy +docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ +ghcr.io/webb-tools/tangle/tangle-standalone:main \ + key insert --base-path /var/lib/tangle/ \ + --chain /data/chainspecs/tangle-standalone.json \ + --scheme Sr25519 \ + --key-type aura +``` + +**Im-online Keys** - **these keys are optional** + +```sh filename="Imonline" copy +docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ +ghcr.io/webb-tools/tangle/tangle-standalone:main \ + key insert --base-path /var/lib/tangle/ \ + --chain /data/chainspecs/tangle-standalone.json \ + --scheme Sr25519 \ + --key-type imon +``` + +**DKG Keys** + +```sh filename="DKG" copy +docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ +ghcr.io/webb-tools/tangle/tangle-standalone:main \ + tangle-standalone key insert --base-path /data \ + --chain /data/chainspecs/tangle-standalone.json \ + --scheme Ecdsa \ + --key-type wdkg +``` + +**Grandpa Keys** + +```sh filename="Grandpa" copy +docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ +ghcr.io/webb-tools/tangle/tangle-standalone:main \ + tangle-standalone key insert --base-path /data \ + --chain /data/chainspecs/tangle-standalone.json \ + --scheme
Ed25519 \ + --key-type gran +``` + +To ensure you have generated the keys correctly, run: + +```sh filename="ls" copy +ls /var/lib/tangle/chains/*/keystore +# You should see some files there; these are the keys. +``` + +### **4. Create the Docker compose file:** + +Now that we have generated the keys, we can start the Tangle Validator and Relayer. We will use the `docker-compose` file provided +in the [Tangle repo](/docs/ecosystem-roles/validator/deploy-with-docker/relayer-node/). + +Let's start by creating a docker-compose file: + +```sh filename="nano" copy +nano ~/webb/tangle/docker-compose.yml +``` + +Add the following lines: + +```yaml filename="docker-compose.yml" copy +# This is an example of a docker compose file which contains both Relayer and Tangle Node. +version: "3" + +services: + webb_relayer: + # Here you should checkout + # https://github.com/webb-tools/relayer/pkgs/container/relayer/versions?filters%5Bversion_type%5D=tagged + # For the latest stable version. Only use "edge" if + # you know what you are doing, it will use the latest and maybe + # unstable version of the relayer. + image: ghcr.io/webb-tools/relayer:${RELAYER_RELEASE_VERSION} + container_name: webb_relayer + env_file: .env + depends_on: + - caddy + ports: + - "$WEBB_PORT:$WEBB_PORT" + volumes: + - $PWD/config:/config + - relayer_data:/store + restart: always + command: /webb-relayer -vvv -c /config + + tangle_standalone: + # Here you should checkout + # https://github.com/webb-tools/tangle/pkgs/container/tangle-standalone/versions?filters%5Bversion_type%5D=tagged + # For the latest stable version. Only use "main" if + # you know what you are doing, it will use the latest and maybe + # unstable version of the node.
+ image: ghcr.io/webb-tools/tangle/tangle-standalone:${TANGLE_RELEASE_VERSION} + container_name: tangle_standalone + env_file: .env + ports: + - "30333:30333" + - "9933:9933" + - "9944:9944" + - "9615:9615" + volumes: + - tangle_data:/data + restart: always + entrypoint: /tangle-standalone + command: + [ + "--base-path=/data", + "--validator", + "--chain=/data/chainspecs/tangle-standalone.json", + "--", + "--execution=wasm", + ] + +volumes: + relayer_data: + driver: local + driver_opts: + type: none + o: bind + device: $PWD/relayer/data + tangle_data: + driver: local + driver_opts: + type: none + o: bind + device: $PWD/tangle/ +``` + +### **5. Set environment variables:** + +Prior to spinning up the Docker containers, we need to set some environment variables. Below is an example `.env` file, +but you will need to update it to reflect your own environment. + +```sh filename="export variables" copy +export TANGLE_RELEASE_VERSION=main +export RELAYER_RELEASE_VERSION=0.5.0-rc1 +export BASE_PATH=/tmp/data/ +export CHAINSPEC_PATH=/tmp/chainspec +export WEBB_PORT=9955 +``` + +### **6. Start Relayer and Validator node:** + +With our keys generated and our docker-compose file created, we can now start the relayer and validator node. + +```sh filename="compose up" copy +docker compose up -d +``` + +The `docker-compose` file will spin up a container running a Tangle validator node and another running a Webb Relayer. + +## Update the Client + +As Tangle development continues, it will sometimes be necessary to upgrade your node software. Node operators will be notified +on our Discord channel when upgrades are available and whether they are necessary (some client upgrades are optional). +The upgrade process is straightforward and is the same for a full node or validator. + +1. Stop the Docker container: + +```sh filename="docker stop" copy +sudo docker stop `CONTAINER_ID` +``` + +2. Get the latest version of Tangle from the Tangle GitHub Release page + +3.
Use the latest version to spin up your node. To do so, replace the version in the Full Node or validator command with the latest one and run it. + +Once your node is running again, you should see logs in your terminal. + +## Purge Your Node + +If you need a fresh instance of your Tangle node, you can purge your node by removing the associated data directory. + +You'll first need to stop the Docker container: + +```sh filename="docker stop" copy +sudo docker stop `CONTAINER_ID` +``` + +If you did not use the `-v` flag to specify a local directory for storing your chain data when you spun up your node, then the data folder is related to the Docker container itself. Therefore, removing the Docker container will remove the chain data. + +If you did spin up your node with the `-v` flag, you will need to purge the specified directory. For example, for the suggested data directory, you can run the following command to purge your standalone node data: + +```sh filename="rm" copy +# purges standalone data +sudo rm -rf /data/chains/* +``` + +If you ran a parachain node, you can run the following command to purge your relay-chain node data: + +```sh filename="rm" copy +# purges relay chain data +sudo rm -rf /data/polkadot/* +``` + +Now that your chain data has been purged, you can start a new node with a fresh data directory! diff --git a/pages/docs/tangle-network/validator/deploy-with-docker/validator-node.mdx b/pages/docs/tangle-network/validator/deploy-with-docker/validator-node.mdx new file mode 100644 index 00000000..a1c18eef --- /dev/null +++ b/pages/docs/tangle-network/validator/deploy-with-docker/validator-node.mdx @@ -0,0 +1,239 @@ +--- +title: Deploying with Docker +description: Deploy a Tangle validator node with only a few steps. +--- + +import Callout from "../../../../../components/Callout"; + +# Deploying with Docker + +A Tangle node can be spun up quickly using Docker.
For more information on installing Docker, +please visit the official Docker [docs](https://docs.docker.com/get-docker/). When connecting to Tangle on Kusama, it will take a few days to completely sync the embedded relay chain. Make sure that your system meets the requirements, which you can read [here](/docs/tangle-network/node-operators/requirements). + +## Standalone Testnet + +### **1. Pull the Tangle Docker image:** + +Although we can make use of the provided `docker-compose` file in the [Tangle repo](https://github.com/webb-tools/tangle/tree/main/docker/tangle-standalone), we pull the `tangle-standalone:main` Docker image from ghcr.io +so that we can generate and insert our required keys before starting the node. + +```sh filename="pull" copy +# Only use "main" if you know what you are doing, it will use the latest and maybe unstable version of the node. + +docker pull ghcr.io/webb-tools/tangle/tangle-standalone:main +``` + +### **2. Create a local directory to store the chain data:** + +Let us create a directory where we will store all the data for our node. This includes the chain data, keys, and logs. + +```sh filename="mkdir" copy +mkdir /var/lib/tangle/ +``` + +### **3. Fetch applicable chainspec(s):** + +To join the Tangle Test network as a node operator, we need to fetch the appropriate chainspec for the Tangle network. +Download the latest chainspec for the standalone testnet: + +```sh filename="get chainspec" copy +# Fetches chainspec for Tangle network +wget https://raw.githubusercontent.com/webb-tools/tangle/main/chainspecs/testnet/tangle-standalone.json +``` + +Please make a note of where you have stored this `json` file, as we will need it in the next steps. + +### **4. Generate and store keys:** + +We need to generate the required keys for our node. For more information on these keys, please see the [Required Keys](/docs/ecosystem-roles/validator/required-keys) section.
+The keys we need to generate include the following: + +- DKG key (Ecdsa) +- Aura key (Sr25519) +- Account key (Sr25519) +- Grandpa key (Ed25519) +- ImOnline key (Sr25519) + +Let's now insert our required secret keys, we will not pass the SURI in the command, instead it will be interactive, where you +should paste your SURI when the command asks for it. + +**Account Keys** + +```sh filename="Acco" copy +# it will ask for your suri, enter it. +docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ +ghcr.io/webb-tools/tangle/tangle-standalone:main \ + key insert --base-path /var/lib/tangle/ \ + --chain /data/chainspecs/tangle-standalone.json \ + --scheme Sr25519 \ + --key-type acco +``` + +**Aura Keys** + +```sh filename="Aura" copy +docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ +ghcr.io/webb-tools/tangle/tangle-standalone:main \ + key insert --base-path /var/lib/tangle/ \ + --chain /data/chainspecs/tangle-standalone.json \ + --scheme Sr25519 \ + --key-type aura +``` + +**Im-online Keys** - **these keys are optional (required if you are running as a validator)** + +```sh filename="Imonline" copy +docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ +ghcr.io/webb-tools/tangle/tangle-standalone:main \ + key insert --base-path /var/lib/tangle/ \ + --chain /data/chainspecs/tangle-standalone.json \ + --scheme Sr25519 \ + --key-type imon +``` + +**DKG Keys** + +```sh filename="DKG" copy +docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ +ghcr.io/webb-tools/tangle/tangle-standalone:main \ + tangle-standalone key insert --base-path /data \ + --chain /data/chainspecs/tangle-standalone.json \ + --scheme Ecdsa \ + --key-type wdkg +``` + +**Grandpa Keys** + +```sh filename="Grandpa" copy +docker run --rm -it --platform linux/amd64 --network="host" -v "/var/lib/data" \ +ghcr.io/webb-tools/tangle/tangle-standalone:main \ + tangle-standalone key insert --base-path 
/data \
+  --chain /data/chainspecs/tangle-standalone.json \
+  --scheme Ed25519 \
+  --key-type gran
+```
+
+To confirm that you have generated the keys correctly, run:
+
+```sh filename="ls" copy
+ls ~/webb/tangle/chains/*/keystore
+# You should see some files there; these are the keys.
+```
+
+**Caution:** Ensure you insert the keys using the instructions at [generate keys](#generate-and-store-keys).
+If you instead want the node to auto-generate the keys, add the `--auto-insert-keys` flag.
+
+### **5. Start Tangle Validator node:**
+
+To start the node, run the following command:
+
+```sh filename="docker run" copy
+docker run --platform linux/amd64 --network="host" -v "/var/lib/data" --entrypoint ./tangle-standalone \
+ghcr.io/webb-tools/tangle/tangle-standalone:main \
+--base-path=/data \
+--chain tangle-testnet \
+--name="YOUR-NODE-NAME" \
+--execution wasm \
+--wasm-execution compiled \
+--trie-cache-size 0 \
+--validator \
+--telemetry-url "wss://telemetry.polkadot.io/submit/ 0"
+```
+
+
+  For an overview of the above flags, please refer to the [CLI Usage](/docs/ecosystem-roles/validator/api-reference/cli/) page of our documentation.
+
+
+Once Docker pulls the necessary images, your Tangle node will start, displaying information
+such as the chain specification, node name, role, genesis state, and more.
+
+If you followed the installation instructions for Tangle, once synced, you will be connected to peers and see
+blocks being produced on the Tangle network!
+ +```sh filename="logs" +2023-03-22 14:55:51 Tangle Standalone Node +2023-03-22 14:55:51 ✌️ version 0.1.15-54624e3-aarch64-macos +2023-03-22 14:55:51 ❤️ by Webb Technologies Inc., 2017-2023 +2023-03-22 14:55:51 📋 Chain specification: Tangle Testnet +2023-03-22 14:55:51 🏷 Node name: cooing-morning-2891 +2023-03-22 14:55:51 👤 Role: FULL +2023-03-22 14:55:51 💾 Database: RocksDb at /Users/local/Library/Application Support/tangle-standalone/chains/local_testnet/db/full +2023-03-22 14:55:51 ⛓ Native runtime: tangle-standalone-115 (tangle-standalone-1.tx1.au1) +2023-03-22 14:55:51 Bn254 x5 w3 params +2023-03-22 14:55:51 [0] 💸 generated 5 npos voters, 5 from validators and 0 nominators +2023-03-22 14:55:51 [0] 💸 generated 5 npos targets +2023-03-22 14:55:51 [0] 💸 generated 5 npos voters, 5 from validators and 0 nominators +2023-03-22 14:55:51 [0] 💸 generated 5 npos targets +2023-03-22 14:55:51 [0] 💸 new validator set of size 5 has been processed for era 1 +2023-03-22 14:55:52 🔨 Initializing Genesis block/state (state: 0xfd16…aefd, header-hash: 0x7c05…a27d) +2023-03-22 14:55:52 👴 Loading GRANDPA authority set from genesis on what appears to be first startup. 
+2023-03-22 14:55:53 Using default protocol ID "sup" because none is configured in the chain specs
+2023-03-22 14:55:53 🏷 Local node identity is: 12D3KooWDaeXbqokqvEMqpJsKBvjt9BUz41uP9tzRkYuky1Wat7Z
+2023-03-22 14:55:53 💻 Operating system: macos
+2023-03-22 14:55:53 💻 CPU architecture: aarch64
+2023-03-22 14:55:53 📦 Highest known block at #0
+2023-03-22 14:55:53 〽️ Prometheus exporter started at 127.0.0.1:9615
+2023-03-22 14:55:53 Running JSON-RPC HTTP server: addr=127.0.0.1:9933, allowed origins=["http://localhost:*", "http://127.0.0.1:*", "https://localhost:*", "https://127.0.0.1:*", "https://polkadot.js.org"]
+2023-03-22 14:55:53 Running JSON-RPC WS server: addr=127.0.0.1:9944, allowed origins=["http://localhost:*", "http://127.0.0.1:*", "https://localhost:*", "https://127.0.0.1:*", "https://polkadot.js.org"]
+2023-03-22 14:55:53 discovered: 12D3KooWMr4L3Dun4BUyp23HZtLfxoQjR56dDp9eH42Va5X6Hfgi /ip4/192.168.0.125/tcp/30304
+2023-03-22 14:55:53 discovered: 12D3KooWNHhcCUsZTdTkADmDJbSK9YjbtscHHA8R4jvrbGwjPVez /ip4/192.168.0.125/tcp/30305
+2023-03-22 14:55:53 discovered: 12D3KooWMr4L3Dun4BUyp23HZtLfxoQjR56dDp9eH42Va5X6Hfgi /ip4/192.168.88.12/tcp/30304
+2023-03-22 14:55:53 discovered: 12D3KooWNHhcCUsZTdTkADmDJbSK9YjbtscHHA8R4jvrbGwjPVez /ip4/192.168.88.12/tcp/30305
+```
+
+### Run via Docker Compose
+
+The docker-compose file will spin up a container running a Tangle standalone node, but you have to set the following environment variables. Remember to customize the values for your environment, then copy and paste them into your CLI.
+
+```sh filename="set variables" copy
+RELEASE_VERSION=main
+CHAINSPEC_PATH=/tmp/chainspec/
+```
+
+After that run:
+
+```sh filename="compose up" copy
+docker compose up -d
+```
+
+## Update the Client
+
+As Tangle development continues, it will sometimes be necessary to upgrade your node software.
Node operators will be notified
+on our Discord channel when upgrades are available and whether they are necessary (some client upgrades are optional).
+The upgrade process is straightforward and is the same for a full node.
+
+1. Stop the docker container:
+
+```sh filename="docker stop" copy
+sudo docker stop `CONTAINER_ID`
+```
+
+2. Get the latest version of Tangle from the Tangle GitHub Release [page](https://github.com/webb-tools/tangle/pkgs/container/tangle%2Ftangle-standalone)
+
+3. Pull the latest version of the Tangle binary with `docker pull ghcr.io/webb-tools/tangle/tangle-standalone:{VERSION_CODE}`.
+   For example, if the latest version of Tangle is v0.1.2, the command would be `docker pull ghcr.io/webb-tools/tangle/tangle-standalone:v0.1.2`
+
+4. Restart the Tangle container and you should have the updated version of the client.
+
+Once your node is running again, you should see logs in your terminal.
+
+## Purge Your Node
+
+If you need a fresh instance of your Tangle node, you can purge your node by removing the associated data directory.
+
+You'll first need to stop the Docker container:
+
+```sh filename="docker stop" copy
+sudo docker stop `CONTAINER_ID`
+```
+
+If you did not use the `-v` flag to specify a local directory for storing your chain data when you spun up your node, then the data folder is tied to the Docker container itself, and removing the container will also remove the chain data.
+
+If you did spin up your node with the `-v` flag, you will need to purge the specified directory. For example, for the suggested data directory, you can run the following command to purge your standalone node data:
+
+```sh filename="rm" copy
+# purges standalone data
+sudo rm -rf /data/chains/*
+```
+
+Now that your chain data has been purged, you can start a new node with a fresh data directory!
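Because `rm -rf` on the wrong path is unrecoverable, you may want to wrap the purge in a small guard script. A minimal sketch — the path check and the `--yes` flag are illustrative conventions of this sketch, not part of the official tooling:

```sh filename="safe-purge.sh" copy
#!/usr/bin/env bash
set -euo pipefail

# Only allow purging paths that look like a node data directory.
is_safe_path() {
  case "$1" in
    */tangle*|*/data*) return 0 ;;
    *) return 1 ;;
  esac
}

# purge DIR [--yes]  -- dry run unless --yes is passed
purge() {
  local dir="$1" confirm="${2:-}"
  if ! is_safe_path "$dir"; then
    echo "refusing to purge unexpected path: $dir" >&2
    return 1
  fi
  if [ "$confirm" != "--yes" ]; then
    echo "dry run: would remove $dir/chains/*"
    return 0
  fi
  sudo rm -rf "$dir"/chains/*
  echo "purged $dir/chains"
}
```

For example, `purge /var/lib/tangle` only prints the dry-run line, while `purge /var/lib/tangle --yes` actually deletes the chain data.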
diff --git a/pages/docs/tangle-network/validator/monitoring/_meta.json b/pages/docs/tangle-network/validator/monitoring/_meta.json new file mode 100644 index 00000000..e8491a85 --- /dev/null +++ b/pages/docs/tangle-network/validator/monitoring/_meta.json @@ -0,0 +1,7 @@ +{ + "quickstart": "Quickstart", + "prometheus": "Prometheus", + "alert-manager": "AlertManager", + "grafana": "Grafana Dashboard", + "loki": "Loki Log Manager" +} diff --git a/pages/docs/tangle-network/validator/monitoring/alert-manager.mdx b/pages/docs/tangle-network/validator/monitoring/alert-manager.mdx new file mode 100644 index 00000000..a6b4664a --- /dev/null +++ b/pages/docs/tangle-network/validator/monitoring/alert-manager.mdx @@ -0,0 +1,342 @@ +--- +title: Alert Manager Setup +description: Create alerts to notify the team when issues arise. +--- + +import { Tabs, Tab } from "../../../../../components/Tabs"; +import Callout from "../../../../../components/Callout"; + +# Alert Manager Setup + +The following is a guide outlining the steps to setup AlertManager to send alerts when a Tangle node or DKG is being disrupted. If you do not have Tangle node setup yet, please +review the **Tangle Node Quickstart** setup guide [here](/docs/ecosystem-roles/validator/quickstart/). + +In this guide we will configure the following modules to send alerts from a running Tangle node. + +- **Alert Manager** listens to Prometheus metrics and pushes an alert as soon as a threshold is crossed (CPU % usage for example). + +## What is Alert Manager? + +The Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, +and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and +inhibition of alerts. To learn more about Alertmanager, please +visit the official docs site [here](https://prometheus.io/docs/alerting/latest/alertmanager/). 
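The silencing mentioned above can be driven from the command line with `amtool`, which ships alongside Alertmanager. A sketch, assuming Alertmanager is listening on its default port — the matcher, duration, and comment are examples only:

```sh filename="amtool silence" copy
# Silence the InstanceDown alert for 2 hours during planned maintenance
amtool silence add alertname=InstanceDown \
  --alertmanager.url=http://localhost:9093 \
  --duration=2h \
  --comment="planned maintenance"

# List the silences currently in effect
amtool silence query --alertmanager.url=http://localhost:9093
```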
+ +### Getting Started + +Let's first start by downloading the latest releases of the above mentioned modules (Alertmanager). + + + This guide assumes the user has root access to the machine running the Tangle node, and following the below steps inside that machine. As well as, + the user has already configured Prometheus on this machine. + + +**1. Download Alertmanager** + + + + + AMD version: + ```sh filename="AMD" copy + wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.darwin-amd64.tar.gz + ``` + ARM version: + ```sh filename="ARM" copy + wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.darwin-arm64.tar.gz + ``` + + + + + AMD version: + ```sh filename="AMD" copy + wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.linux-amd64.tar.gz + ``` + ARM version: + ```sh filename="ARM" copy + wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.linux-arm64.tar.gz && + ``` + + For other linux distrubutions please visit official release page [here](https://github.com/prometheus/prometheus/releases). + + + + + AMD version: + ```sh filename="AMD" copy + wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.windows-amd64.tar.gz + ``` + ARM version: + ```sh filename="ARM" copy + wget https://github.com/prometheus/alertmanager/releases/download/v0.24.0/alertmanager-0.24.0.windows-arm64.tar.gz + ``` + + + + +**2. Extract the Downloaded Files:** + +Run the following command: + +```sh filename="tar" copy +tar xvf alertmanager-*.tar.gz +``` + +**3. Copy the Extracted Files into `/usr/local/bin`:** + + + **Note:** The example below makes use of the `linux-amd64` installations, please update to make use of the target system you have installed. 
+ + +Copy the `alertmanager` binary and `amtool`: + +```sh filename="cp" copy +sudo cp ./alertmanager-*.linux-amd64/alertmanager /usr/local/bin/ && +sudo cp ./alertmanager-*.linux-amd64/amtool /usr/local/bin/ +``` + +**4. Create Dedicated Users:** + +Now we want to create dedicated users for the Alertmanager module we have installed: + +```sh filename="useradd" copy +sudo useradd --no-create-home --shell /usr/sbin/nologin alertmanager +``` + +**5. Create Directories for `Alertmanager`:** + +```sh filename="mkdir" copy +sudo mkdir /etc/alertmanager && +sudo mkdir /var/lib/alertmanager +``` + +**6. Change the Ownership for all Directories:** + +We need to give our user permissions to access these directories: + +**alertManager**: + +```sh filename="chown" copy +sudo chown alertmanager:alertmanager /etc/alertmanager/ -R && +sudo chown alertmanager:alertmanager /var/lib/alertmanager/ -R && +sudo chown alertmanager:alertmanager /usr/local/bin/alertmanager && +sudo chown alertmanager:alertmanager /usr/local/bin/amtool +``` + +**7. Finally, let's clean up these directories:** + +```sh filename="rm" copy +rm -rf ./alertmanager* +``` + +Great! You have now installed and setup your environment. The next series of steps will be configuring the service. + +## Configuration + +If you are interested to see how we configure the Tangle Network nodes for monitoring check out https://github.com/webb-tools/tangle/tree/main/monitoring. + +### Prometheus + +The first thing we need to do is add `rules.yml` file to our Prometheus configuration: + +Let’s create the `rules.yml` file that will give the rules for Alert manager: + +```sh filename="nano" copy +sudo touch /etc/prometheus/rules.yml +sudo nano /etc/prometheus/rules.yml +``` + +We are going to create 2 basic rules that will trigger an alert in case the instance is down or the CPU usage crosses 80%. 
+You can create all kinds of rules that can be triggered; for an exhaustive list of rules, see our rules list [here](https://github.com/webb-tools/tangle/blob/main/monitoring/prometheus/rules.yml).
+
+Add the following lines and save the file:
+
+```yaml filename="group" copy
+groups:
+  - name: alert_rules
+    rules:
+      - alert: InstanceDown
+        expr: up == 0
+        for: 5m
+        labels:
+          severity: critical
+        annotations:
+          summary: "Instance {{ $labels.instance }} down"
+          description: "[{{ $labels.instance }}] of job [{{ $labels.job }}] has been down for more than 5 minutes."
+
+      - alert: HostHighCpuLoad
+        expr: 100 - (avg by(instance)(rate(node_cpu_seconds_total{mode="idle"}[2m])) * 100) > 80
+        for: 0m
+        labels:
+          severity: warning
+        annotations:
+          summary: Host high CPU load (instance bLd Kusama)
+          description: "CPU load is > 80%\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"
+```
+
+The criteria for triggering an alert are set in the `expr:` part. You can customize these triggers as you see fit.
+
+Then, check the rules file:
+
+```sh filename="promtool rules" copy
+promtool check rules /etc/prometheus/rules.yml
+```
+
+And finally, check the Prometheus config file:
+
+```sh filename="promtool check" copy
+promtool check config /etc/prometheus/prometheus.yml
+```
+
+### Gmail setup
+
+We can use a Gmail address to send the alert emails. For that, we will need to generate an app password from our Gmail account.
+
+Note: we recommend using a dedicated email address for your alerts. Review Google's own guide for
+proper setup [here](https://support.google.com/mail/answer/185833?hl=en).
+
+### Slack notifications
+
+We can also utilize Slack notifications to send the alerts through. For that, we need a specific Slack channel to send the notifications to, and
+to install the Incoming WebHooks Slack application.
+
+To do so, navigate to:
+
+1. Administration > Manage Apps.
+2. Search for "Incoming Webhooks"
+3. Install into your Slack workspace.
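Once the app is installed and you have copied your webhook URL, it is worth sending a test message before wiring it into Alertmanager. A quick check — the URL below is a placeholder, substitute your own webhook:

```sh filename="webhook test" copy
# Post a test message to the channel behind the webhook
curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Alertmanager webhook test"}' \
  https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
```

Slack responds with `ok` in the body when the message is delivered to the channel.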
+ +### Alertmanager + +The Alert manager config file is used to set the external service that will be called when an alert is triggered. Here, we are going to use the Gmail and Slack notification created previously. + +Let’s create the file: + +```sh filename="nano" copy +sudo touch /etc/alertmanager/alertmanager.yml +sudo nano /etc/alertmanager/alertmanager.yml +``` + +And add the Gmail configuration to it and save the file: + +```sh filename="Gmail config" copy +global: + resolve_timeout: 1m + +route: + receiver: 'gmail-notifications' + +receivers: +- name: 'gmail-notifications' + email_configs: + - to: 'EMAIL-ADDRESS' + from: 'EMAIL-ADDRESS' + smarthost: 'smtp.gmail.com:587' + auth_username: 'EMAIL-ADDRESS' + auth_identity: 'EMAIL-ADDRESS' + auth_password: 'EMAIL-ADDRESS' + send_resolved: true + + +# ******************************************************************************************************************************************** +# Alert Manager for Slack Notifications * +# ******************************************************************************************************************************************** + + global: + resolve_timeout: 1m + slack_api_url: 'INSERT SLACK API URL' + + route: + receiver: 'slack-notifications' + + receivers: + - name: 'slack-notifications' + slack_configs: + - channel: 'channel-name' + send_resolved: true + icon_url: https://avatars3.githubusercontent.com/u/3380462 + title: |- + [{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }} for {{ .CommonLabels.job }} + {{- if gt (len .CommonLabels) (len .GroupLabels) -}} + {{" "}}( + {{- with .CommonLabels.Remove .GroupLabels.Names }} + {{- range $index, $label := .SortedPairs -}} + {{ if $index }}, {{ end }} + {{- $label.Name }}="{{ $label.Value -}}" + {{- end }} + {{- end -}} + ) + {{- end }} + text: >- + {{ range .Alerts -}} + *Alert:* {{ .Annotations.title }}{{ if .Labels.severity }} - `{{ .Labels.severity 
}}`{{ end }} + *Description:* {{ .Annotations.description }} + *Details:* + {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}` + {{ end }} + {{ end }} +``` + +Of course, you have to change the email addresses and the auth_password with the one generated from Google previously. + +## Service Setup + +### Alert manager + +Create and open the Alert manager service file: + +```sh filename="create service" copy +sudo tee /etc/systemd/system/alertmanager.service > /dev/null << EOF +[Unit] + Description=AlertManager Server Service + Wants=network-online.target + After=network-online.target + +[Service] + User=alertmanager + Group=alertmanager + Type=simple + ExecStart=/usr/local/bin/alertmanager \ + --config.file /etc/alertmanager/alertmanager.yml \ + --storage.path /var/lib/alertmanager \ + --web.external-url=http://localhost:9093 \ + --cluster.advertise-address='0.0.0.0:9093' + +[Install] +WantedBy=multi-user.target +EOF +``` + +## Starting the Services + +Launch a daemon reload to take the services into account in systemd: + +```sh filename="daemon-reload" copy +sudo systemctl daemon-reload +``` + +Next, we will want to start the alertManager service: + +**alertManager**: + +```sh filename="start service" copy +sudo systemctl start alertmanager.service +``` + +And check that they are working fine: + +**alertManager**:: + +```sh filename="status" copy +sudo systemctl status alertmanager.service +``` + +If everything is working adequately, activate the services! + +**alertManager**: + +```sh filename="enable" copy +sudo systemctl enable alertmanager.service +``` + +Amazing! We have now successfully added alert monitoring for our Tangle node! 
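Before relying on these alerts, it helps to confirm that the configuration parses and that the service answers. Two quick checks, assuming the default port used above:

```sh filename="verify" copy
# Validate the Alertmanager configuration file
amtool check-config /etc/alertmanager/alertmanager.yml

# Confirm the running Alertmanager reports healthy
curl -s http://localhost:9093/-/healthy
```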
diff --git a/pages/docs/tangle-network/validator/monitoring/grafana.mdx b/pages/docs/tangle-network/validator/monitoring/grafana.mdx new file mode 100644 index 00000000..916cb9ac --- /dev/null +++ b/pages/docs/tangle-network/validator/monitoring/grafana.mdx @@ -0,0 +1,193 @@ +--- +title: Grafana Dashboard Setup +description: Create visual dashboards for the metrics captured by Prometheus. +--- + +import { Tabs, Tab } from "../../../../../components/Tabs"; +import Callout from "../../../../../components/Callout"; + +# Grafana Setup + +The following is a guide outlining the steps to setup Grafana Dashboard to visualize metric data for a Tangle node. If you do not have Tangle node setup yet, please +review the **Tangle Node Quickstart** setup guide [here](/docs/ecosystem-roles/validator/quickstart/). + +In this guide we will configure the following modules to visualize metric data from a running Tangle node. + +- **Grafana** is the visual dashboard tool that we access from the outside (through SSH tunnel to keep the node secure). + +## What are Grafana Dashboards? + +A dashboard is a set of one or more panels organized and arranged into one or more rows. Grafana ships with a variety of panels making it easy to +construct the right queries, and customize the visualization so that you can create the perfect dashboard for your need. Each panel can interact +with data from any configured Grafana data source. To learn more about Grafana Dashboards, please +visit the official docs site [here](https://grafana.com/docs/grafana/latest/dashboards/). + +### Getting Started + +Let's first start by downloading the latest releases of the above mentioned modules (Grafana). + + + This guide assumes the user has root access to the machine running the Tangle node, and following the below steps inside that machine. As well as, + the user has already configured Prometheus on this machine. + + +**1. 
Download Grafana** + + + + + ```sh filename="brew" copy + brew update + brew install grafana + ``` + + + + + ```sh filename="linux" copy + sudo apt-get install -y apt-transport-https + sudo apt-get install -y software-properties-common wget + wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add - + ``` + + For other linux distrubutions please visit official release page [here](https://grafana.com/grafana/download?edition=oss&platform=linux). + + + + +**2. Add Grafana repository to APT sources:** + + + This guide assumes the user is installing and configuring Grafana for a linux machine. For Macos instructions + please visit the offical docs [here](https://grafana.com/docs/grafana/v9.0/setup-grafana/installation/mac/). + + +```sh filename="add-apt" copy +sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main" +``` + +**3. Refresh your APT cache to update your package lists:** + +```sh filename="apt update" copy +sudo apt update +``` + +**4. Next, make sure Grafana will be installed from the Grafana repository:** + +```sh filename="apt-cache" copy +apt-cache policy grafana +``` + +The output of the previous command tells you the version of Grafana that you are about to install, and where you will retrieve the package from. Verify that the installation candidate at the top of the list will come from the official Grafana repository at `https://packages.grafana.com/oss/deb`. + +```sh filename="output" +Output of apt-cache policy grafana +grafana: + Installed: (none) + Candidate: 6.3.3 + Version table: + 6.3.3 500 + 500 https://packages.grafana.com/oss/deb stable/main amd64 Packages +... +``` + +**5. You can now proceed with the installation:** + +```sh filename="install grafana" copy +sudo apt install grafana +``` + +**6. 
Install the Alert manager plugin for Grafana:**
+
+```sh filename="grafana-cli" copy
+sudo grafana-cli plugins install camptocamp-prometheus-alertmanager-datasource
+```
+
+## Service Setup
+
+### Grafana
+
+The Grafana service is automatically created during extraction of the deb package; you do not need to create it manually.
+
+Launch a daemon reload to take the services into account in systemd:
+
+```sh filename="daemon-reload" copy
+sudo systemctl daemon-reload
+```
+
+**Start the Grafana service:**
+
+```sh filename="start service" copy
+sudo systemctl start grafana-server
+```
+
+And check that it is working correctly:
+
+```sh filename="status" copy
+systemctl status grafana-server
+```
+
+If everything is working adequately, activate the service!
+
+```sh filename="enable" copy
+sudo systemctl enable grafana-server
+```
+
+## Run Grafana dashboard
+
+Now we are going to set up the dashboard to visualize the metrics we are capturing.
+
+From the browser on your local machine, navigate to `http://localhost:3000/login`. You should be greeted with
+a login screen. You can log in with the default credentials, `admin/admin`. Be sure to update your password afterwards.
+
+  This guide assumes the user has configured Prometheus, AlertManager, and Loki as data sources.
+
+**Next, we need to add Prometheus as a data source.**
+
+1. Open the Settings menu
+2. Select **Data Sources**
+3. Select **Add Data Source**
+4. Select Prometheus
+5. Input the URL field with http://localhost:9090
+6. Click Save & Test
+
+**Next, we need to add AlertManager as a data source.**
+
+1. Open the Settings menu
+2. Select **Data Sources**
+3. Select **Add Data Source**
+4. Select AlertManager
+5. Input the URL field with http://localhost:9093
+6. Click Save & Test
+
+**Next, we need to add Loki as a data source.**
+
+1. Open the Settings menu
+2. Select **Data Sources**
+3. Select **Add Data Source**
+4. Select Loki
+5. Input the URL field with http://localhost:3100
+6.
Click Save & Test
+
+We have our data sources connected, now it's time to import the dashboard we want to use. You may
+create your own or import others, but for the purposes of this guide we will use the Polkadot Essentials dashboard created
+by bLD nodes!
+
+**To import a dashboard:**
+
+1. Select the + button
+2. Select **Import**
+3. Input the dashboard number, **13840**
+4. Select Prometheus and AlertManager as data sources from the dropdown menu
+5. Click Load
+
+**In the dashboard selection, make sure you select:**
+
+- **Chain Metrics**: substrate
+- **Chain Instance Host**: localhost:9615 to point to the chain data scraper
+- **Chain Process Name**: the name of your node binary
+
+Congratulations! You have now configured Grafana to visualize the metrics we are capturing. You now
+have monitoring set up for your node!
diff --git a/pages/docs/tangle-network/validator/monitoring/loki.mdx b/pages/docs/tangle-network/validator/monitoring/loki.mdx
new file mode 100644
index 00000000..31d92fa6
--- /dev/null
+++ b/pages/docs/tangle-network/validator/monitoring/loki.mdx
@@ -0,0 +1,334 @@
+---
+title: Loki Log Management
+description: A service dedicated to aggregating and querying system logs.
+---
+
+import { Tabs, Tab } from "../../../../../components/Tabs";
+import Callout from "../../../../../components/Callout";
+
+# Loki Log Management
+
+The following is a guide outlining the steps to set up Loki for log management of a Tangle node. If you do not have a Tangle node set up yet, please
+review the **Tangle Node Quickstart** setup guide [here](/docs/ecosystem-roles/validator/quickstart/).
+
+In this guide we will configure the following modules to scrape metrics from the running Tangle node.
+
+- **Loki** provides the log aggregation system and metrics.
+- **Promtail** is the agent responsible for gathering logs and sending them to Loki.
+
+Let's first start by downloading the latest releases of the above mentioned modules (Loki and Promtail).
+ + + This guide assumes the user has root access to the machine running the Tangle node, and following the below steps inside that machine. + + +**1. Download Loki** + + + + + AMD version: + ```sh filename="AMD" copy + curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/loki-darwin-amd64.zip" + ``` + ARM version: + ```sh filename="ARM" copy + curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/loki-darwin-arm64.zip" + ``` + + + + + AMD version: + ```sh filename="AMD" copy + curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/loki-linux-amd64.zip" + ``` + ARM version: + ```sh filename="ARM" copy + curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/loki-linux-arm64.zip" + ``` + + For other linux distrubutions please visit official release page [here](https://github.com/grafana/loki/releases). + + + + + AMD version: + ```sh filename="AMD" copy + curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/loki-windows-amd64.exe.zip" + ``` + + + + +**2. Download Promtail** + + + + + AMD version: + ```sh filename="AMD" copy + curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/promtail-darwin-amd64.zip" + ``` + ARM version: + ```sh filename="ARM" copy + curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/promtail-darwin-arm64.zip" + ``` + + + + + AMD version: + ```sh filename="AMD" copy + curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/promtail-linux-amd64.zip" + ``` + ARM version: + ```sh filename="ARM" copy + curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/promtail-linux-arm64.zip" + ``` + + + + + AMD version: + ```sh filename="AMD" copy + curl -O -L "https://github.com/grafana/loki/releases/download/v2.7.0/promtail-windows-amd64.exe.zip" + ``` + + + + +**3. Extract the Downloaded Files:** + +```sh filename="unzip" copy +unzip "loki-linux-amd64.zip" && +unzip "promtail-linux-amd64.zip" +``` + +**4. 
Copy the Extracted Files into `/usr/local/bin`:**
+
+```sh filename="cp" copy
+sudo cp loki-linux-amd64 /usr/local/bin/ &&
+sudo cp promtail-linux-amd64 /usr/local/bin/
+```
+
+**5. Create Dedicated Users:**
+
+Now we want to create dedicated users for each of the modules we have installed:
+
+```sh filename="useradd" copy
+sudo useradd --no-create-home --shell /usr/sbin/nologin loki &&
+sudo useradd --no-create-home --shell /usr/sbin/nologin promtail
+```
+
+**6. Create Directories for `loki`, and `promtail`:**
+
+```sh filename="mkdir" copy
+sudo mkdir /etc/loki &&
+sudo mkdir /etc/promtail
+```
+
+**7. Change the Ownership for all Directories:**
+
+We need to give our users permission to access these directories:
+
+```sh filename="chown" copy
+sudo chown loki:loki /usr/local/bin/loki-linux-amd64 &&
+sudo chown promtail:promtail /usr/local/bin/promtail-linux-amd64
+```
+
+**8. Finally, let's clean up these directories:**
+
+```sh filename="rm" copy
+rm -rf ./loki-linux-amd64* &&
+rm -rf ./promtail-linux-amd64*
+```
+
+The next series of steps will be configuring each service.
+
+## Configuration
+
+If you are interested in seeing how we configure the Tangle Network nodes for monitoring, check out https://github.com/webb-tools/tangle/tree/main/monitoring.
+
+### Loki
+
+Loki's configuration details what ports to listen on, how to store the logs, and other configuration options.
+There are many other config options for Loki, and you can read more about Loki configuration at: https://grafana.com/docs/loki/latest/configuration/ + +Let’s create the file: + +```sh filename="nano" copy +sudo touch /etc/loki/config.yml +sudo nano /etc/loki/config.yml +``` + +```yaml filename="config.yaml" copy +auth_enabled: false + +server: + http_listen_port: 3100 + grpc_listen_port: 9096 + +ingester: + lifecycler: + address: 127.0.0.1 + ring: + kvstore: + store: inmemory + replication_factor: 1 + final_sleep: 0s + chunk_idle_period: 5m + chunk_retain_period: 30s + max_transfer_retries: 0 + +schema_config: + configs: + - from: 2020-10-24 + store: boltdb-shipper + object_store: filesystem + schema: v11 + index: + prefix: index_ + period: 168h + + +storage_config: + boltdb: + directory: /data/loki/index + + filesystem: + directory: /data/loki/chunks + +limits_config: + enforce_metric_name: false + reject_old_samples: true + reject_old_samples_max_age: 168h + +chunk_store_config: + max_look_back_period: 0s + +table_manager: + retention_deletes_enabled: false + retention_period: 0 +``` + +### Promtail + +The Promtail configuration details what logs to send to Loki. In the below configuration we are indicating +to send the logs to Loki from the `/var/log/dkg` directory. This directory can be changed based on what logs you +want to pick up. 
There are many other config options for Promtail, and you can read more about Promtail configuration at: https://grafana.com/docs/loki/latest/clients/promtail/configuration/
+
+Let’s create the file:
+
+```sh filename="nano" copy
+sudo touch /etc/promtail/config.yml
+sudo nano /etc/promtail/config.yml
+```
+
+```yaml filename="config.yml" copy
+server:
+  http_listen_port: 9080
+  grpc_listen_port: 0
+
+positions:
+  filename: /data/loki/positions.yaml
+
+clients:
+  - url: http://localhost:3100/loki/api/v1/push
+
+scrape_configs:
+  - job_name: system
+    static_configs:
+      - targets:
+          - localhost
+        labels:
+          job: varlogs
+          __path__: /var/log/dkg/*log
+```
+
+## Service Setup
+
+### Loki
+
+Create and open the Loki service file:
+
+```sh filename="loki.service" copy
+sudo tee /etc/systemd/system/loki.service > /dev/null << EOF
+[Unit]
+  Description=Loki Service
+  Wants=network-online.target
+  After=network-online.target
+
+[Service]
+  User=loki
+  Group=loki
+  Type=simple
+  ExecStart=/usr/local/bin/loki-linux-amd64 -config.file /etc/loki/config.yml
+
+[Install]
+  WantedBy=multi-user.target
+EOF
+```
+
+### Promtail
+
+Create and open the Promtail service file:
+
+```sh filename="promtail.service" copy
+sudo tee /etc/systemd/system/promtail.service > /dev/null << EOF
+[Unit]
+  Description=Promtail Service
+  Wants=network-online.target
+  After=network-online.target
+
+[Service]
+  User=promtail
+  Group=promtail
+  Type=simple
+  ExecStart=/usr/local/bin/promtail-linux-amd64 -config.file /etc/promtail/config.yml
+
+[Install]
+  WantedBy=multi-user.target
+EOF
+```
+
+Great! You have now configured all the services needed to run Loki.
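Once the services are started (next section), a quick way to confirm that both processes are listening is to hit their built-in HTTP endpoints. This is a sketch assuming the default ports from the configs above; `/ready` is Loki's standard readiness probe and Promtail serves its own metrics on its HTTP port:

```sh filename="verify" copy
# Loki readiness probe (returns "ready" once the ingester is up)
curl -s http://localhost:3100/ready

# Promtail exposes its own metrics on the http_listen_port from its config
curl -s http://localhost:9080/metrics | head -n 5
```

If either request fails, check the service logs with `journalctl -u loki.service` or `journalctl -u promtail.service`.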
+
+## Starting the Services
+
+Launch a daemon reload to take the services into account in systemd:
+
+```sh filename="daemon-reload" copy
+sudo systemctl daemon-reload
+```
+
+Next, we will want to start each service:
+
+```sh filename="start service" copy
+sudo systemctl start loki.service &&
+sudo systemctl start promtail.service
+```
+
+And check that they are working fine, one by one:
+
+**loki**:
+
+```sh filename="status" copy
+systemctl status loki.service
+```
+
+**promtail**:
+
+```sh filename="status" copy
+systemctl status promtail.service
+```
+
+If everything is working correctly, enable the services so they start on boot:
+
+```sh filename="enable" copy
+sudo systemctl enable loki.service &&
+sudo systemctl enable promtail.service
+```
+
+Amazing! You have now successfully configured Loki for log management. Check out the Grafana
+documentation to create a Loki log dashboard!
diff --git a/pages/docs/tangle-network/validator/monitoring/prometheus.mdx b/pages/docs/tangle-network/validator/monitoring/prometheus.mdx
new file mode 100644
index 00000000..bbcb3f74
--- /dev/null
+++ b/pages/docs/tangle-network/validator/monitoring/prometheus.mdx
@@ -0,0 +1,435 @@
+---
+title: Prometheus Setup
+description: Set up Prometheus for scraping node metrics and more.
+---
+
+import { Tabs, Tab } from "../../../../../components/Tabs";
+import Callout from "../../../../../components/Callout";
+
+# Prometheus Setup
+
+The following is a guide outlining the steps to set up Prometheus to monitor a Tangle node. If you do not have a Tangle node set up yet, please
+review the **Tangle Node Quickstart** setup guide [here](/docs/ecosystem-roles/validator/quickstart/). It is important to note that
+this guide's purpose is to help you get started with monitoring your Tangle node, not to advise on how to set up a node securely. Please
+take additional security and privacy measures into consideration.
+
+In this guide we will configure the following modules to scrape metrics from the running Tangle node.
+
+- **Prometheus** is the central module; it pulls metrics from different sources to provide them to the Grafana dashboard and Alert Manager.
+- **Node exporter** provides hardware metrics for the dashboard.
+- **Process exporter** provides process metrics for the dashboard (optional).
+
+## What is Prometheus?
+
+Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012,
+many companies and organizations have adopted Prometheus, and the project has a very active developer and user community.
+It is now a standalone open source project and maintained independently of any company. To learn more about Prometheus, please
+visit the official docs site [here](https://prometheus.io/docs/introduction/overview/).
+
+### Getting Started
+
+Let's first start by downloading the latest releases of the above-mentioned modules (Prometheus, Process exporter, and Node exporter).
+
+
+  This guide assumes the user has root access to the machine running the Tangle node, and that the steps below are run on that machine.
+
+
+**1. Download Prometheus**
+
+
+
+
+  AMD version:
+  ```sh filename="AMD" copy
+  wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.darwin-amd64.tar.gz
+  ```
+  ARM version:
+  ```sh filename="ARM" copy
+  wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.darwin-arm64.tar.gz
+  ```
+
+
+
+
+  AMD version:
+  ```sh filename="AMD" copy
+  wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.linux-amd64.tar.gz
+  ```
+  ARM version:
+  ```sh filename="ARM" copy
+  wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.linux-arm64.tar.gz
+  ```
+
+  For other Linux distributions please visit the official release page [here](https://github.com/prometheus/prometheus/releases).
+
+
+
+
+  AMD version:
+  ```sh filename="AMD" copy
+  wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.windows-amd64.tar.gz
+  ```
+  ARM version:
+  ```sh filename="ARM" copy
+  wget https://github.com/prometheus/prometheus/releases/download/v2.40.3/prometheus-2.40.3.windows-arm64.tar.gz
+  ```
+
+
+
+
+**2. Download Node Exporter**
+
+
+
+
+  AMD version:
+  ```sh filename="AMD" copy
+  wget https://github.com/prometheus/node_exporter/releases/download/v1.4.0/node_exporter-1.4.0.darwin-amd64.tar.gz
+  ```
+  ARM version:
+  ```sh filename="ARM" copy
+  wget https://github.com/prometheus/node_exporter/releases/download/v1.4.0/node_exporter-1.4.0.darwin-arm64.tar.gz
+  ```
+
+
+
+
+  AMD version:
+  ```sh filename="AMD" copy
+  wget https://github.com/prometheus/node_exporter/releases/download/v1.4.0/node_exporter-1.4.0.linux-amd64.tar.gz
+  ```
+  ARM version:
+  ```sh filename="ARM" copy
+  wget https://github.com/prometheus/node_exporter/releases/download/v1.4.0/node_exporter-1.4.0.linux-arm64.tar.gz
+  ```
+
+  For other Linux distributions please visit the official release page [here](https://github.com/prometheus/node_exporter/releases).
+
+
+
+
+**3. Download Process Exporter**
+
+
+
+
+  AMD version:
+  ```sh filename="AMD" copy
+  wget https://github.com/ncabatoff/process-exporter/releases/download/v0.7.10/process-exporter-0.7.10.linux-amd64.tar.gz
+  ```
+  ARM version:
+  ```sh filename="ARM" copy
+  wget https://github.com/ncabatoff/process-exporter/releases/download/v0.7.10/process-exporter-0.7.10.linux-arm64.tar.gz
+  ```
+
+  For other Linux distributions please visit the official release page [here](https://github.com/ncabatoff/process-exporter/releases).
+
+
+
+
+**4. Extract the Downloaded Files:**
+
+Run the following command:
+
+```sh filename="tar" copy
+tar xvf prometheus-*.tar.gz &&
+tar xvf node_exporter-*.tar.gz &&
+tar xvf process-exporter-*.tar.gz
+```
+
+**5. 
Copy the Extracted Files into `/usr/local/bin`:**
+
+
+  **Note:** The example below uses the `linux-amd64` builds; update the paths to match the platform you installed.
+
+
+We are first going to copy the `prometheus` binary:
+
+```sh filename="cp" copy
+sudo cp ./prometheus-*.linux-amd64/prometheus /usr/local/bin/
+```
+
+Next, we are going to create the `/etc/prometheus` directory and copy over the `prometheus` console libraries:
+
+```sh filename="cp" copy
+sudo mkdir /etc/prometheus &&
+sudo cp -r ./prometheus-*.linux-amd64/consoles /etc/prometheus &&
+sudo cp -r ./prometheus-*.linux-amd64/console_libraries /etc/prometheus
+```
+
+We are going to do the same with `node-exporter` and `process-exporter`:
+
+```sh filename="cp" copy
+sudo cp ./node_exporter-*.linux-amd64/node_exporter /usr/local/bin/ &&
+sudo cp ./process-exporter-*.linux-amd64/process-exporter /usr/local/bin/
+```
+
+**6. Create Dedicated Users:**
+
+Now we want to create dedicated users for each of the modules we have installed:
+
+```sh filename="useradd" copy
+sudo useradd --no-create-home --shell /usr/sbin/nologin prometheus &&
+sudo useradd --no-create-home --shell /usr/sbin/nologin node_exporter &&
+sudo useradd --no-create-home --shell /usr/sbin/nologin process-exporter
+```
+
+**7. Create Directories for `Prometheus` and `Process exporter`:**
+
+```sh filename="mkdir" copy
+sudo mkdir /var/lib/prometheus &&
+sudo mkdir /etc/process-exporter
+```
+
+**8. 
Change the Ownership for all Directories:**
+
+We need to give our users permissions to access these directories:
+
+**prometheus**:
+
+```sh filename="chown" copy
+sudo chown prometheus:prometheus /etc/prometheus/ -R &&
+sudo chown prometheus:prometheus /var/lib/prometheus/ -R &&
+sudo chown prometheus:prometheus /usr/local/bin/prometheus
+```
+
+**node_exporter**:
+
+```sh filename="chown" copy
+sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
+```
+
+**process-exporter**:
+
+```sh filename="chown" copy
+sudo chown process-exporter:process-exporter /etc/process-exporter -R &&
+sudo chown process-exporter:process-exporter /usr/local/bin/process-exporter
+```
+
+**9. Finally, let's clean up these directories:**
+
+```sh filename="rm" copy
+rm -rf ./prometheus* &&
+rm -rf ./node_exporter* &&
+rm -rf ./process-exporter*
+```
+
+Great! You have now installed and set up your environment. The next series of steps will cover configuring each service.
+
+## Configuration
+
+If you are interested in seeing how we configure the Tangle Network nodes for monitoring, check out https://github.com/webb-tools/tangle/tree/main/monitoring.
+
+### Prometheus
+
+Let’s edit the Prometheus config file and add all the modules to it:
+
+```sh filename="nano" copy
+sudo nano /etc/prometheus/prometheus.yml
+```
+
+Add the following code to the file and save:
+
+```yaml filename="prometheus.yml" copy
+global:
+  scrape_interval: 15s
+  evaluation_interval: 15s
+
+rule_files:
+  - 'rules.yml'
+
+alerting:
+  alertmanagers:
+    - static_configs:
+        - targets:
+            - localhost:9093
+
+scrape_configs:
+  - job_name: "prometheus"
+    scrape_interval: 5s
+    static_configs:
+      - targets: ["localhost:9090"]
+  - job_name: "substrate_node"
+    scrape_interval: 5s
+    static_configs:
+      - targets: ["localhost:9615"]
+  - job_name: "node_exporter"
+    scrape_interval: 5s
+    static_configs:
+      - targets: ["localhost:9100"]
+  - job_name: "process-exporter"
+    scrape_interval: 5s
+    static_configs:
+      - targets: ["localhost:9256"]
+```
+
+- **scrape_interval** defines how often Prometheus scrapes targets, while **evaluation_interval** controls how often rules are evaluated.
+- **rule_files** sets the location of the alert rules we will add next.
+- **alerting** contains the Alert Manager target.
+- **scrape_configs** contains the services Prometheus will monitor.
+
+Notice the first scrape job, where Prometheus monitors itself.
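The config above references a `rules.yml` file. Below is a minimal placeholder to start from; the group name, threshold, and labels are illustrative and not part of the original guide. It fires when any scraped target stops responding for five minutes. Save it as `/etc/prometheus/rules.yml`, alongside `prometheus.yml`:

```yaml filename="rules.yml" copy
groups:
  - name: node-alerts
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Target {{ $labels.instance }} has been unreachable for 5 minutes"
```

The `up` metric is generated automatically by Prometheus for every scrape target, so this rule works without any extra exporters.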
+
+### Process exporter
+
+Process exporter needs a config file to be told which processes it should take into account:
+
+```sh filename="nano" copy
+sudo touch /etc/process-exporter/config.yml
+sudo nano /etc/process-exporter/config.yml
+```
+
+Add the following code to the file and save:
+
+```yaml filename="config.yml" copy
+process_names:
+  - name: "{{.Comm}}"
+    cmdline:
+      - '.+'
+```
+
+## Service Setup
+
+### Prometheus
+
+Create and open the Prometheus service file:
+
+```sh filename="prometheus.service" copy
+sudo tee /etc/systemd/system/prometheus.service > /dev/null << EOF
+[Unit]
+  Description=Prometheus Monitoring
+  Wants=network-online.target
+  After=network-online.target
+
+[Service]
+  User=prometheus
+  Group=prometheus
+  Type=simple
+  ExecStart=/usr/local/bin/prometheus \
+    --config.file /etc/prometheus/prometheus.yml \
+    --storage.tsdb.path /var/lib/prometheus/ \
+    --web.console.templates=/etc/prometheus/consoles \
+    --web.console.libraries=/etc/prometheus/console_libraries
+  ExecReload=/bin/kill -HUP \$MAINPID
+
+[Install]
+  WantedBy=multi-user.target
+EOF
+```
+
+### Node exporter
+
+Create and open the Node exporter service file:
+
+```sh filename="node_exporter.service" copy
+sudo tee /etc/systemd/system/node_exporter.service > /dev/null << EOF
+[Unit]
+  Description=Node Exporter
+  Wants=network-online.target
+  After=network-online.target
+
+[Service]
+  User=node_exporter
+  Group=node_exporter
+  Type=simple
+  ExecStart=/usr/local/bin/node_exporter
+
+[Install]
+  WantedBy=multi-user.target
+EOF
+```
+
+### Process exporter
+
+Create and open the Process exporter service file:
+
+```sh filename="process-exporter.service" copy
+sudo tee /etc/systemd/system/process-exporter.service > /dev/null << EOF
+[Unit]
+  Description=Process Exporter
+  Wants=network-online.target
+  After=network-online.target
+
+[Service]
+  User=process-exporter
+  Group=process-exporter
+  Type=simple
+  ExecStart=/usr/local/bin/process-exporter \
+    --config.path 
/etc/process-exporter/config.yml
+
+[Install]
+  WantedBy=multi-user.target
+EOF
+```
+
+## Starting the Services
+
+Launch a daemon reload to take the services into account in systemd:
+
+```sh filename="daemon-reload" copy
+sudo systemctl daemon-reload
+```
+
+Next, we will want to start each service:
+
+**prometheus**:
+
+```sh filename="start service" copy
+sudo systemctl start prometheus.service
+```
+
+**node_exporter**:
+
+```sh filename="start service" copy
+sudo systemctl start node_exporter.service
+```
+
+**process-exporter**:
+
+```sh filename="start service" copy
+sudo systemctl start process-exporter.service
+```
+
+And check that they are working fine:
+
+**prometheus**:
+
+```sh filename="status" copy
+systemctl status prometheus.service
+```
+
+**node_exporter**:
+
+```sh filename="status" copy
+systemctl status node_exporter.service
+```
+
+**process-exporter**:
+
+```sh filename="status" copy
+systemctl status process-exporter.service
+```
+
+If everything is working correctly, enable the services so they start on boot:
+
+**prometheus**:
+
+```sh filename="enable" copy
+sudo systemctl enable prometheus.service
+```
+
+**node_exporter**:
+
+```sh filename="enable" copy
+sudo systemctl enable node_exporter.service
+```
+
+**process-exporter**:
+
+```sh filename="enable" copy
+sudo systemctl enable process-exporter.service
+```
+
+Amazing! We have now completely set up our Prometheus monitoring and are scraping metrics from our
+running Tangle node.
+
+You can view the raw metrics at `http://localhost:9090/metrics`, and the Prometheus UI at `http://localhost:9090`!
diff --git a/pages/docs/tangle-network/validator/monitoring/quickstart.mdx b/pages/docs/tangle-network/validator/monitoring/quickstart.mdx
new file mode 100644
index 00000000..a39eae5b
--- /dev/null
+++ b/pages/docs/tangle-network/validator/monitoring/quickstart.mdx
@@ -0,0 +1,59 @@
+---
+title: Quickstart
+description: Creating a monitoring stack for a Tangle node.
+---
+
+import { Tabs, Tab } from "../../../../../components/Tabs";
+import Callout from "../../../../../components/Callout";
+
+# Monitoring Tangle Node
+
+The following is a guide outlining the steps to set up monitoring for a Tangle node. If you do not have a Tangle node set up yet, please
+review the **How to run a Tangle node** setup guide [here](https://docs.webb.tools/v1/node-operators/run-tangle-node). It is important to note that
+this guide's purpose is to help you get started with monitoring your Tangle node, not to advise on how to set up a node securely. Please
+take additional security and privacy measures into consideration.
+
+Here is what our final configuration will look like at the end of this guide:
+
+- **Prometheus** is the central module; it pulls metrics from different sources to provide them to the Grafana dashboard and Alert Manager.
+- **Grafana** is the visual dashboard tool that we access from the outside (through an SSH tunnel to keep the node secure).
+- **Alert Manager** listens to Prometheus metrics and pushes an alert as soon as a threshold is crossed (CPU % usage, for example).
+- **Tangle Node** natively provides metrics for monitoring.
+- **Process exporter** provides process metrics for the dashboard (optional).
+- **Loki** provides a log aggregation system and metrics.
+- **Promtail** is the agent responsible for gathering logs and sending them to Loki.
+
+
+  Running the monitoring stack requires that you are already running the Tangle network node with at least the following ports exposed:
+  - Prometheus: `http://localhost:9615`
+
+
+## Docker usage
+
+The quickest way to set up monitoring for your node is to use our provided `docker-compose` file. The docker image starts all the above monitoring
+tools with the exception of `Node exporter`. `node-exporter` is omitted since some metrics are not available when running inside a docker container.
+
+Follow the instructions [here](/prometheus) to start the Prometheus node exporter.
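If you want to see the shape of such a stack before cloning the repository, here is a trimmed sketch of a typical compose file for these services. The image tags, ports, and volume paths are illustrative only; the repository's own `docker-compose` file is the source of truth:

```yaml filename="docker-compose.yml" copy
version: "3"
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus:/etc/prometheus   # prometheus.yml and rules live here
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"                    # Grafana dashboard
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/log:/var/log              # adjust to your node's log directory
```

All four images are the official ones published by the Prometheus and Grafana projects on Docker Hub.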
+
+### Prerequisites
+
+Before starting the monitoring stack, ensure the configs are set up correctly:
+
+- (Optional) Set the `__SLACK_WEBHOOK_URL__` in `alertmanager.yml` to receive Slack alerts
+- Ensure the Promtail mount path matches your log directory
+
+**Note:** All containers require a connection to the localhost. This behaviour differs between Linux, Windows, and Mac; the configs within the `docker-compose` and yml
+files assume a Linux environment. Refer to [this](https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach) to make the necessary adjustments for your environment.
+
+### Usage
+
+**To start the monitoring stack, run:**
+
+```sh filename="compose up" copy
+cd monitoring
+docker compose up -d
+```
+
+You can then navigate to `http://localhost:3000` to access the Grafana dashboard!
+
+![Tangle Dashboard](../../../../../components/images/tangle-metrics.png)
diff --git a/pages/docs/tangle-network/validator/quickstart.mdx b/pages/docs/tangle-network/validator/quickstart.mdx
new file mode 100644
index 00000000..0fc80041
--- /dev/null
+++ b/pages/docs/tangle-network/validator/quickstart.mdx
@@ -0,0 +1,43 @@
+---
+title: Node Operator Quickstart
+description: Participate in the Webb ecosystem by deploying a Tangle node, to validate, serve data or more.
+---
+
+import { QuickDeployArea, DeployArea, SupportArea, MonitoringArea } from "../../../../components/TangleQuickstart"
+import { RepoArea } from "../../../../components/RepoArea";
+import FullWebbCTA from "../../../../components/FullWebbCTA";
+
+# Node Operator Quickstart
+
+Becoming a node operator on the Tangle Network requires some technical skills, trust, and support from the community. Below
+is a collection of quick links to get set up quickly!
+
+**If you're looking to understand how to become a Validator in Substrate systems like Tangle, see the [Polkadot Docs](https://wiki.polkadot.network/docs/maintain-guides-how-to-validate-polkadot) as well.**
+
+## Quick Setup
+
+
+
+## Advanced Setup
+
+
+
+## Monitoring
+
+Monitoring and troubleshooting your Tangle node is essential, and we provide setup instructions to make it incredibly easy to get started!
+
+
+
+## Support Channels
+
+Run into weird issues? Or have questions about the Tangle Network? Join the Webb community and become connected to the entire Webb ecosystem.
+
+
+
+## Repositories
+
+Interested in what we are building at Webb? Clone the below repositories, and start contributing to a private cross-chain future!
+
+
+
+
diff --git a/pages/docs/tangle-network/validator/required-keys.mdx b/pages/docs/tangle-network/validator/required-keys.mdx
new file mode 100644
index 00000000..8abb1604
--- /dev/null
+++ b/pages/docs/tangle-network/validator/required-keys.mdx
@@ -0,0 +1,141 @@
+---
+title: Required Keys
+description: Describes the keys necessary to start and run a Tangle node.
+---
+
+import Callout from "../../../../components/Callout";
+
+
+  This guide assumes you have a validator already running; refer to [Running With Docker](./deploy-with-docker/validator-node.mdx) or [Running with systemd](./systemd/validator-node.mdx) to ensure your node is set up correctly.
+
+
+# Required Keys
+
+In order to participate in the distributed key generation protocol, block production, and block finalization, you will be required to set up a few keys. These keys
+include:
+
+- DKG key (Ecdsa)
+- Aura key (Sr25519)
+- Account key (Sr25519)
+- Grandpa key (Ed25519)
+- ImOnline key (Sr25519)
+
+To generate each of the above keys we will make use of [subkey](https://docs.substrate.io/reference/command-line-tools/subkey/). You will need to install
+subkey before running the command.
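If you do not have subkey installed yet, it can be built with cargo; the install command below follows the Substrate docs, and `subkey generate` prints a fresh mnemonic plus the derived keys. Treat the output as a secret:

```sh filename="subkey" copy
# Build and install subkey from the Substrate repository (this can take a while)
cargo install --force subkey --git https://github.com/paritytech/substrate --locked

# Generate a new sr25519 keypair and 12-word mnemonic
subkey generate --scheme sr25519
```

The mnemonic printed by `subkey generate` is what you pass as the `--suri` value in the commands below.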
+
+
+  Keep in mind the below commands use the `/tangle-data` base path; please specify your preferred base path during execution.
+
+
+**Once installed, to generate the DKG key you can run the following:**
+
+```sh filename="DKG Key" copy
+tangle-standalone key insert --base-path /tangle-data \
+--chain "" \
+--scheme Ecdsa \
+--suri "<12-PHRASE-MNEMONIC>" \
+--key-type wdkg
+```
+
+**To generate the Aura key you can run the following:**
+
+```sh filename="Aura Key" copy
+tangle-standalone key insert --base-path /tangle-data \
+--chain "" \
+--scheme Sr25519 \
+--suri "<12-PHRASE-MNEMONIC>" \
+--key-type aura
+```
+
+**To generate the Account key you can run the following:**
+
+```sh filename="Account Key" copy
+tangle-standalone key insert --base-path /tangle-data \
+--chain "" \
+--scheme Sr25519 \
+--suri "<12-PHRASE-MNEMONIC>" \
+--key-type acco
+```
+
+**To generate the ImOnline key you can run the following:**
+
+```sh filename="ImOnline Key" copy
+tangle-standalone key insert --base-path /tangle-data \
+--chain "" \
+--scheme Sr25519 \
+--suri "<12-PHRASE-MNEMONIC>" \
+--key-type imon
+```
+
+### Synchronize Chain Data
+
+You can begin syncing your node by running the following command:
+
+```sh filename="Syncing node" copy
+./target/release/tangle-standalone
+```
+
+Once your node has fully synchronized with the network, you may proceed to set up the
+necessary accounts to operate a node.
+
+## Bond funds
+
+To start validating, you need to have x TNT tokens for Tangle Network. It is highly recommended that you use two separate accounts
+for your controller and stash. For this, you will create two accounts and make sure each of them has at least
+enough funds to pay the fees for making transactions. Keep most of your funds in the stash account since it is meant to be
+the custodian of your staking funds.
+
+Make sure not to bond all your TNT balance since you will be unable to pay transaction fees from your bonded balance.
+
+It is now time to set up our validator. We will do the following:
+
+- Bond the TNT of the Stash account. These TNT tokens will be put at stake for the security of the network and can be slashed.
+- Select the Controller. This is the account that will decide when to start or stop validating.
+
+First, go to the Staking section. Click on "Account Actions", and then the "+ Stash" button. It should look something
+similar to the below image.
+
+![bond](../../../../components/images/bond.png)
+
+Once everything is filled in properly, click Bond and sign the transaction with your Stash account.
+
+## Session Keys
+
+Operators need to set their `Author` session keys. Run the following command to generate the session keys.
+**Note:** You may need to change `http://localhost:9933` to your correct address.
+
+```sh filename="Generate session key" copy
+curl -H "Content-Type: application/json" -d '{"id":1, "jsonrpc":"2.0", "method": "author_rotateKeys", "params":[]}' http://localhost:9933
+```
+
+The result will look like this; copy the key:
+
+```
+{"jsonrpc":"2.0","result":"0x400e3cef43bdessab331e4g03115c4bcecws3cxff608fa3b8sh6b07y369386570","id":1}
+```
+
+### Set session keys
+
+1. Go to the Polkadot.js portal: `Developer > Extrinsic`.
+2. Select your account and extrinsic type: session / setKeys.
+3. Enter the session keys and set proof to `0x00`.
+4. Submit the transaction.
+
+### Setting identity
+
+Operators need to set their identity.
+
+1. Go to the Polkadot.js portal: `Accounts`
+2. Open the 3 dots next to your address: `Set on-chain Identity`
+3. Enter all fields you want to set.
+4. Send the transaction.
+
+### Request judgment
+
+1. Go to the Polkadot.js portal: `Developer > Extrinsic`
+2. Select your account and extrinsic type: `identity / requestJudgment`
+3. Send the transaction.
+
+### Producing blocks
+
+Once your node is active, you will see your name inside the Network tab every time you produce a block!
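Rather than copying the session key out of the raw JSON response by hand, you can extract the `result` field with plain POSIX shell parameter expansion. The sketch below operates on the sample response shown earlier; against a live node, assign `response` from the `curl` command's output instead:

```shell
# Sample author_rotateKeys response (as shown above); on a live node, use:
#   response=$(curl -s -H "Content-Type: application/json" -d '...' http://localhost:9933)
response='{"jsonrpc":"2.0","result":"0x400e3cef43bdessab331e4g03115c4bcecws3cxff608fa3b8sh6b07y369386570","id":1}'

# Strip everything up to and including "result":"
keys="${response#*\"result\":\"}"
# Strip the first closing quote and everything after it
keys="${keys%%\"*}"

echo "$keys"
```

The extracted hex blob is exactly what you paste into the session keys field of the `session / setKeys` extrinsic.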
diff --git a/pages/docs/tangle-network/validator/requirements.mdx b/pages/docs/tangle-network/validator/requirements.mdx
new file mode 100644
index 00000000..0650ca60
--- /dev/null
+++ b/pages/docs/tangle-network/validator/requirements.mdx
@@ -0,0 +1,189 @@
+---
+title: Requirements
+description: An overview of Webb Tangle node requirements.
+---
+
+import { Tabs, Tab } from "../../../../components/Tabs";
+
+# Requirements
+
+The current Tangle testnet is a standalone network, meaning that it is not connected to the Polkadot or Kusama relay chain.
+Since Tangle is not a parachain, the node build is quite small: it only contains the code to run the standalone Tangle network, with no relay chain syncing or cross-chain communication. As such, the build does not require the same minimum spec requirements as a parachain node.
+
+The following are the ideal or recommended specifications, but nodes can be run with less. Testnet nodes have also been run using AWS t3.Large instances.
+
+| Component | Requirements                                                                                           |
+| --------- | ------------------------------------------------------------------------------------------------------ |
+| CPU       | Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz                                                                |
+| Storage   | An NVMe solid state drive of 500 GB (as it should be reasonably sized to deal with blockchain growth).  |
+| Memory    | 32GB ECC                                                                                                |
+| Firewall  | P2P port must be open to incoming traffic:
- Source: Any
- Destination: 30333, 30334 TCP |
+
+## Running Ports
+
+As stated before, the standalone node listens on multiple ports, using the default Substrate ports.
+
+The only ports that need to be open for incoming traffic are those designated for P2P.
+
+**Default Ports for a Tangle Full-Node:**
+
+| Description | Port        |
+| ----------- | ----------- |
+| P2P         | 30333 (TCP) |
+| RPC         | 9933        |
+| WS          | 9944        |
+| Prometheus  | 9615        |
+
+## Dependencies
+
+In order to build a Tangle node from source, your machine must have specific dependencies installed. This guide
+outlines those requirements.
+
+This guide uses the [https://rustup.rs](https://rustup.rs) installer and the `rustup` tool to manage the Rust toolchain. Rust is required to
+compile a Tangle node.
+
+First install and configure `rustup`:
+
+```sh filename="Install Rust" copy
+# Install
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+
+# Configure
+source ~/.cargo/env
+```
+
+Configure the Rust toolchain to default to nightly and add the nightly wasm target:
+
+```sh filename="Configure Rust" copy
+rustup default nightly
+rustup update
+rustup update nightly
+rustup target add wasm32-unknown-unknown --toolchain nightly
+```
+
+Great! Now your Rust environment is ready! 
🚀🚀 + +### Substrate Dependencies + + + + + Debian version: + ```sh filename=" Debian" copy + sudo apt install --assume-yes git clang curl libssl-dev llvm libudev-dev make protobuf-compiler + ``` + Arch version: + ```sh filename="Arch" copy + pacman -Syu --needed --noconfirm curl git clang make protobuf + ``` + Fedora version: + ```sh filename="Fedora" copy + sudo dnf update + sudo dnf install clang curl git openssl-devel make protobuf-compiler + ``` + Opensuse version: + ```sh filename="Opensuse" copy + sudo zypper install clang curl git openssl-devel llvm-devel libudev-devel make protobuf + ``` + + Remember that different distributions might use different package managers and bundle packages in different ways. + For example, depending on your installation selections, Ubuntu Desktop and Ubuntu Server might have different packages + and different requirements. However, the packages listed in the command-line examples are applicable for many common Linux + distributions, including Debian, Linux Mint, MX Linux, and Elementary OS. + + + + + Assumes user has Homebrew already installed. + + ```sh filename="Brew" copy + brew update + brew install openssl gmp protobuf cmake + ``` + + + + + For Windows users please refer to the official Substrate documentation: + [Windows](https://docs.substrate.io/install/windows/) + + + + +### Build from Source 💻 + +Once the development environment is set up, you can build the Tangle node from source. + +```sh filename="Clone repo" copy +git clone https://github.com/webb-tools/tangle.git +``` + +```sh filename="Build" copy +cargo build --release +``` + +> NOTE: You _must_ use the release builds! The optimizations here are required +> as in debug mode, it is expected that nodes are not able to run fast enough to produce blocks. 
+
+You will now have the `tangle-standalone` binary built in the `target/release/` directory.
+
+#### Feature Flags
+
+Some features of the Tangle node are gated behind feature flags. To enable these features, you will have to build the binary with the corresponding flags enabled:
+
+1. **txpool**
+
+This feature flag helps trace and debug EVM transactions on the chain. You should build the node with this flag if you intend to use it to follow EVM transactions:
+
+```sh filename="Build txpool" copy
+cargo build --release --features txpool
+```
+
+2. **relayer**
+
+This feature flag starts the embedded transaction relayer with the Tangle node. You should build the node with this flag if you intend to run a relayer, which can be used for transaction relaying or data querying:
+
+```sh filename="Build relayer" copy
+cargo build --release --features relayer
+```
+
+3. **light-client**
+
+This feature flag starts the embedded light client with the Tangle node. You should build the node with this flag if you intend to run a light client relayer to sync EVM data on Tangle:
+
+```sh filename="Build light" copy
+cargo build --release --features light-client
+```
+
+### Use Precompiled binary 💻
+
+Every release of the Tangle node includes a precompiled binary. It is currently limited to the amd64 architecture, but we plan to
+support more soon. You can view all releases [here](https://github.com/webb-tools/tangle/releases).
+
+In the below commands, substitute `LATEST_RELEASE` with the version you want to use; the current latest version is `0.4.6`.
+
+### Get tangle binary
+
+```sh filename="Get binary" copy
+wget https://github.com/webb-tools/tangle/releases/download/LATEST_RELEASE/tangle-standalone-linux-amd64
+```
+
+### Get tangle binary with txpool feature
+
+```sh filename="Get binary txpool" copy
+wget https://github.com/webb-tools/tangle/releases/download/LATEST_RELEASE/tangle-standalone-txpool-linux-amd64
+```
+
+### Get tangle binary with relayer feature
+
+```sh filename="Get binary relayer" copy
+wget https://github.com/webb-tools/tangle/releases/download/LATEST_RELEASE/tangle-standalone-relayer-linux-amd64
+```
+
+### Get tangle binary with light-client feature
+
+```sh filename="Get binary light" copy
+wget https://github.com/webb-tools/tangle/releases/download/LATEST_RELEASE/tangle-standalone-light-client-linux-amd64
+```
diff --git a/pages/docs/tangle-network/validator/systemd/_meta.json b/pages/docs/tangle-network/validator/systemd/_meta.json
new file mode 100644
index 00000000..a3cff2b9
--- /dev/null
+++ b/pages/docs/tangle-network/validator/systemd/_meta.json
@@ -0,0 +1,4 @@
+{
+  "full-node": "Full Node",
+  "validator-node": "Validator Node"
+}
diff --git a/pages/docs/tangle-network/validator/systemd/full-node.mdx b/pages/docs/tangle-network/validator/systemd/full-node.mdx
new file mode 100644
index 00000000..745890f3
--- /dev/null
+++ b/pages/docs/tangle-network/validator/systemd/full-node.mdx
@@ -0,0 +1,110 @@
+---
+title: Running with Systemd
+description: Run a Tangle full node using systemd.
+---
+
+# Running with Systemd
+
+You can run your full node as a systemd process so that it will automatically restart on server reboots
+or crashes (and helps to avoid getting slashed!).
+
+Before following this guide you should have already set up your machine's environment, installed the dependencies, and
+compiled the Tangle binary.
If you have not done so, please refer to the [Requirements](https://docs.webb.tools/docs/ecosystem-roles/validator/requirements/) page.
+
+## System service setup
+
+Run the following commands to move the binary into place for the service:
+
+```sh filename="mv" copy
+# Move the tangle-standalone binary to the bin directory (assumes you are in repo root directory)
+sudo mv ./target/release/tangle-standalone /usr/bin/
+```
+
+Add the following contents to the service configuration file. Make sure to replace **USERNAME** with the username you created in the previous step, add your own node name, and update
+any paths or ports to your own preference.
+
+**Note:** The below configuration assumes you are targeting the Tangle Network chainspec.
+
+**Full Node**
+
+```sh filename="full.service" copy
+sudo tee /etc/systemd/system/full.service > /dev/null << EOF
+[Unit]
+Description=Tangle Full Node
+After=network-online.target
+StartLimitIntervalSec=0
+
+[Service]
+User=<USERNAME>
+Restart=always
+RestartSec=3
+ExecStart=/usr/bin/tangle-standalone \
+  --base-path /data/full-node \
+  --name <NODE-NAME> \
+  --chain tangle-testnet \
+  --node-key-file "/home/<USERNAME>/node-key" \
+  --rpc-cors all \
+  --port 9946 \
+  --no-mdns \
+  --telemetry-url "wss://telemetry.polkadot.io/submit/ 0"
+
+[Install]
+WantedBy=multi-user.target
+EOF
+```
+
+**Full Node with EVM trace**
+
+**Note:** To run with EVM trace, you should use a binary built with the `txpool` flag; refer to the [requirements](../requirements.mdx) page for more details.
+
+```sh filename="full.service" copy
+sudo tee /etc/systemd/system/full.service > /dev/null << EOF
+[Unit]
+Description=Tangle Full Node
+After=network-online.target
+StartLimitIntervalSec=0
+
+[Service]
+User=<USERNAME>
+Restart=always
+RestartSec=3
+ExecStart=/usr/bin/tangle-standalone \
+  --base-path /data/full-node \
+  --name <NODE-NAME> \
+  --chain tangle-testnet \
+  --node-key-file "/home/<USERNAME>/node-key" \
+  --rpc-cors all \
+  --port 9946 \
+  --no-mdns --ethapi trace,debug,txpool
+
+[Install]
+WantedBy=multi-user.target
+EOF
+```
+
+### Enable the services
+
+Double check that the config has been written to `/etc/systemd/system/full.service` correctly.
+If so, enable the service so it runs on startup, and then try to start it now:
+
+```sh filename="enable service" copy
+sudo systemctl daemon-reload
+sudo systemctl enable full
+sudo systemctl start full
+```
+
+Check the status of the service:
+
+```sh filename="status" copy
+sudo systemctl status full
+```
+
+You should see the node connecting to the network and syncing the latest blocks.
+If you need to tail the latest output, you can use:
+
+```sh filename="logs" copy
+sudo journalctl -u full.service -f
+```
+
+Congratulations! You have officially set up a Tangle Network node using systemd. If you are interested
+in learning how to set up monitoring for your node, please refer to the [monitoring](../monitoring/quickstart.mdx) page.
diff --git a/pages/docs/tangle-network/validator/systemd/quick-node.mdx b/pages/docs/tangle-network/validator/systemd/quick-node.mdx
new file mode 100644
index 00000000..d6088d24
--- /dev/null
+++ b/pages/docs/tangle-network/validator/systemd/quick-node.mdx
@@ -0,0 +1,90 @@
+---
+title: Quickstart
+description: Run a Tangle Validator node using systemd.
+---
+
+# Tangle Validator Quickstart
+
+**Caution:** The following guide is only meant as a quickstart for anyone looking to run a Tangle node with minimal
+config. It uses automatically generated keys, and running a validator with this setup long term is not recommended; refer to the [advanced](/docs/ecosystem-roles/validator/systemd/validator-node/) guide
+for a more secure long-term setup.
+
+Before following this guide you should have already set up your machine's environment, installed the dependencies, and
+compiled the Tangle binary. If you have not done so, please refer to the [Requirements](/docs/ecosystem-roles/validator/requirements/) page.
+
+## Standalone Testnet
+
+### 1. Fetch the tangle binary
+
+Use the latest release version in the URL in place of `<LATEST_RELEASE>`; you can visit the [releases](https://github.com/webb-tools/tangle/releases) page to view the latest info.
+
+```
+wget https://github.com/webb-tools/tangle/releases/download/<LATEST_RELEASE>/tangle-standalone-linux-amd64
+```
+
+For example, at the time of writing this document, the latest release is v0.4.7 and the link would be as follows:
+
+```
+wget https://github.com/webb-tools/tangle/releases/download/v0.4.7/tangle-standalone-linux-amd64
+```
+
+### 2. Start the node binary
+
+To start the binary, run the following command (ensure you are in the same folder where tangle-standalone was downloaded).
+
+Make sure to change the following params before executing the command:
+
+1. `<BASE-PATH>`: The path where your chain DB will live
+2.
`<NODE-NAME>`: A unique name for your node; choose something distinctive to help identify your node to other validators and in telemetry data
+
+```
+./tangle-standalone-linux-amd64 \
+  --base-path <BASE-PATH> \
+  --name <NODE-NAME> \
+  --chain tangle-testnet \
+  --port 9944 \
+  --validator \
+  --auto-insert-keys \
+  --telemetry-url "wss://telemetry.polkadot.io/submit/ 0"
+```
+
+If the node is running correctly, you should see an output similar to below:
+
+```
+2023-03-22 14:55:51 Tangle Standalone Node
+2023-03-22 14:55:51 ✌️ version 0.1.15-54624e3-aarch64-macos
+2023-03-22 14:55:51 ❤️ by Webb Technologies Inc., 2017-2023
+2023-03-22 14:55:51 📋 Chain specification: Tangle Testnet
+2023-03-22 14:55:51 🏷 Node name: cooing-morning-2891
+2023-03-22 14:55:51 👤 Role: FULL
+2023-03-22 14:55:51 💾 Database: RocksDb at /Users/local/Library/Application Support/tangle-standalone/chains/local_testnet/db/full
+2023-03-22 14:55:51 ⛓ Native runtime: tangle-standalone-115 (tangle-standalone-1.tx1.au1)
+2023-03-22 14:55:51 Bn254 x5 w3 params
+2023-03-22 14:55:51 [0] 💸 generated 5 npos voters, 5 from validators and 0 nominators
+2023-03-22 14:55:51 [0] 💸 generated 5 npos targets
+2023-03-22 14:55:51 [0] 💸 generated 5 npos voters, 5 from validators and 0 nominators
+2023-03-22 14:55:51 [0] 💸 generated 5 npos targets
+2023-03-22 14:55:51 [0] 💸 new validator set of size 5 has been processed for era 1
+2023-03-22 14:55:52 🔨 Initializing Genesis block/state (state: 0xfd16…aefd, header-hash: 0x7c05…a27d)
+2023-03-22 14:55:52 👴 Loading GRANDPA authority set from genesis on what appears to be first startup.
+2023-03-22 14:55:53 Using default protocol ID "sup" because none is configured in the chain specs
+2023-03-22 14:55:53 🏷 Local node identity is: 12D3KooWDaeXbqokqvEMqpJsKBvjt9BUz41uP9tzRkYuky1Wat7Z
+2023-03-22 14:55:53 💻 Operating system: macos
+2023-03-22 14:55:53 💻 CPU architecture: aarch64
+2023-03-22 14:55:53 📦 Highest known block at #0
+2023-03-22 14:55:53 〽️ Prometheus exporter started at 127.0.0.1:9615
+2023-03-22 14:55:53 Running JSON-RPC HTTP server: addr=127.0.0.1:9933, allowed origins=["http://localhost:*", "http://127.0.0.1:*", "https://localhost:*", "https://127.0.0.1:*", "https://polkadot.js.org"]
+2023-03-22 14:55:53 Running JSON-RPC WS server: addr=127.0.0.1:9944, allowed origins=["http://localhost:*", "http://127.0.0.1:*", "https://localhost:*", "https://127.0.0.1:*", "https://polkadot.js.org"]
+2023-03-22 14:55:53 discovered: 12D3KooWMr4L3Dun4BUyp23HZtLfxoQjR56dDp9eH42Va5X6Hfgi /ip4/192.168.0.125/tcp/30304
+2023-03-22 14:55:53 discovered: 12D3KooWNHhcCUsZTdTkADmDJbSK9YjbtscHHA8R4jvrbGwjPVez /ip4/192.168.0.125/tcp/30305
+2023-03-22 14:55:53 discovered: 12D3KooWMr4L3Dun4BUyp23HZtLfxoQjR56dDp9eH42Va5X6Hfgi /ip4/192.168.88.12/tcp/30304
+2023-03-22 14:55:53 discovered: 12D3KooWNHhcCUsZTdTkADmDJbSK9YjbtscHHA8R4jvrbGwjPVez /ip4/192.168.88.12/tcp/30305
+```
+
+**Note:** Since the `--auto-insert-keys` flag was used, the logs will print out the keys automatically generated for you.
+Make sure to note them down and store them safely; if you ever need to migrate or restart your node, these keys are essential.
+
+Congratulations! You have officially set up a Tangle Network node. Remember that this quickstart uses automatically generated keys and is not recommended for running a validator long term; refer to the [advanced](/docs/ecosystem-roles/validator/systemd/validator-node/) guide
+for a more secure long-term setup.
If you are interested
+in learning how to set up monitoring for your node, please refer to the [monitoring](../monitoring/quickstart.mdx) page.
diff --git a/pages/docs/tangle-network/validator/systemd/validator-node.mdx b/pages/docs/tangle-network/validator/systemd/validator-node.mdx
new file mode 100644
index 00000000..4cbd66f3
--- /dev/null
+++ b/pages/docs/tangle-network/validator/systemd/validator-node.mdx
@@ -0,0 +1,216 @@
+---
+title: Running with Systemd
+description: Run a Tangle Validator node using systemd.
+---
+
+# Running with Systemd
+
+You can run your validator node as a systemd process so that it will automatically restart on server reboots
+or crashes (and helps to avoid getting slashed!).
+
+Before following this guide you should have already set up your machine's environment, installed the dependencies, and
+compiled the Tangle binary. If you have not done so, please refer to the [Requirements](/docs/ecosystem-roles/validator/requirements/) page.
+
+## Standalone Testnet
+
+### Generate and store keys
+
+We need to generate the required keys for our node. For more information on these keys, please see the [Required Keys](/docs/ecosystem-roles/validator/required-keys/) section.
+The keys we need to generate include the following:
+
+- DKG key (Ecdsa)
+- Aura key (Sr25519)
+- Account key (Sr25519)
+- Grandpa key (Ed25519)
+- ImOnline key (Sr25519)
+
+Let's now insert our required secret keys. We will not pass the SURI in the command; instead, the command is interactive, and you
+should paste your SURI when it asks for it.
+
+**Account Keys**
+
+```sh filename="Acco" copy
+# It will ask for your SURI; enter it.
+./target/release/tangle-standalone key insert --base-path /data/validator/ \
+--chain ./chainspecs/tangle-standalone.json \
+--scheme Sr25519 \
+--suri <"12-MNEMONIC-PHRASE"> \
+--key-type acco
+```
+
+**Aura Keys**
+
+```sh filename="Aura" copy
+# It will ask for your SURI; enter it.
+./target/release/tangle-standalone key insert --base-path /data/validator/ \
+--chain ./chainspecs/tangle-standalone.json \
+--scheme Sr25519 \
+--suri <"12-MNEMONIC-PHRASE"> \
+--key-type aura
+```
+
+**Im-online Keys** - **these keys are optional**
+
+```sh filename="Imonline" copy
+# It will ask for your SURI; enter it.
+./target/release/tangle-standalone key insert --base-path /data/validator/ \
+--chain ./chainspecs/tangle-standalone.json \
+--scheme Sr25519 \
+--suri <"12-MNEMONIC-PHRASE"> \
+--key-type imon
+```
+
+**DKG Keys**
+
+```sh filename="DKG" copy
+# It will ask for your SURI; enter it.
+./target/release/tangle-standalone key insert --base-path /data/validator/ \
+--chain ./chainspecs/tangle-standalone.json \
+--scheme Ecdsa \
+--suri <"12-MNEMONIC-PHRASE"> \
+--key-type wdkg
+```
+
+**Grandpa Keys**
+
+```sh filename="Grandpa" copy
+# It will ask for your SURI; enter it.
+./target/release/tangle-standalone key insert --base-path /data/validator/ \
+--chain ./chainspecs/tangle-standalone.json \
+--scheme Ed25519 \
+--suri <"12-MNEMONIC-PHRASE"> \
+--key-type gran
+```
+
+To ensure you have generated the keys correctly, run:
+
+```sh filename="ls" copy
+ls ~/data/validator//keystore
+# You should see some files there; these are the keys.
+```
+
+## System service setup
+
+Run the following commands to move the binary into place for the service:
+
+```sh filename="mv" copy
+# Move the tangle-standalone binary to the bin directory (assumes you are in repo root directory)
+sudo mv ./target/release/tangle-standalone /usr/bin/
+```
+
+Add the following contents to the service configuration file. Make sure to replace **USERNAME** with the username you created in the previous step, add your own node name, and update any paths or ports to your own preference.
+
+**Note:** The below configuration assumes you are targeting the Tangle Network chainspec.
+
+**Caution:** Ensure you insert the keys using the instructions at [generate keys](#generate-and-store-keys);
+if you want the node to auto-generate the keys, add the `--auto-insert-keys` flag.
+
+**Validator Node**
+
+```sh filename="validator.service" copy
+sudo tee /etc/systemd/system/validator.service > /dev/null << EOF
+[Unit]
+Description=Tangle Validator Node
+After=network-online.target
+StartLimitIntervalSec=0
+
+[Service]
+User=<USERNAME>
+Restart=always
+RestartSec=3
+ExecStart=/usr/bin/tangle-standalone \
+  --base-path /data/validator/ \
+  --name <NODE-NAME> \
+  --chain tangle-testnet \
+  --node-key-file "/home/<USERNAME>/node-key" \
+  --port 30333 \
+  --validator \
+  --no-mdns \
+  --telemetry-url "wss://telemetry.polkadot.io/submit/ 0"
+
+[Install]
+WantedBy=multi-user.target
+EOF
+```
+
+### Enable the services
+
+Double check that the config has been written to `/etc/systemd/system/validator.service` correctly.
+If so, enable the service so it runs on startup, and then try to start it now:
+
+```sh filename="enable service" copy
+sudo systemctl daemon-reload
+sudo systemctl enable validator
+sudo systemctl start validator
+```
+
+Check the status of the service:
+
+```sh filename="status" copy
+sudo systemctl status validator
+```
+
+You should see the node connecting to the network and syncing the latest blocks.
+If you need to tail the latest output, you can use: + +```sh filename="logs" copy +sudo journalctl -u validator.service -f +``` + +If the node is running correctly, you should see an output similar to below: + +```sh filename="output" +2023-03-22 14:55:51 Tangle Standalone Node +2023-03-22 14:55:51 ✌️ version 0.1.15-54624e3-aarch64-macos +2023-03-22 14:55:51 ❤️ by Webb Technologies Inc., 2017-2023 +2023-03-22 14:55:51 📋 Chain specification: Tangle Testnet +2023-03-22 14:55:51 🏷 Node name: cooing-morning-2891 +2023-03-22 14:55:51 👤 Role: FULL +2023-03-22 14:55:51 💾 Database: RocksDb at /Users/local/Library/Application Support/tangle-standalone/chains/local_testnet/db/full +2023-03-22 14:55:51 ⛓ Native runtime: tangle-standalone-115 (tangle-standalone-1.tx1.au1) +2023-03-22 14:55:51 Bn254 x5 w3 params +2023-03-22 14:55:51 [0] 💸 generated 5 npos voters, 5 from validators and 0 nominators +2023-03-22 14:55:51 [0] 💸 generated 5 npos targets +2023-03-22 14:55:51 [0] 💸 generated 5 npos voters, 5 from validators and 0 nominators +2023-03-22 14:55:51 [0] 💸 generated 5 npos targets +2023-03-22 14:55:51 [0] 💸 new validator set of size 5 has been processed for era 1 +2023-03-22 14:55:52 🔨 Initializing Genesis block/state (state: 0xfd16…aefd, header-hash: 0x7c05…a27d) +2023-03-22 14:55:52 👴 Loading GRANDPA authority set from genesis on what appears to be first startup. 
+2023-03-22 14:55:53 Using default protocol ID "sup" because none is configured in the chain specs +2023-03-22 14:55:53 🏷 Local node identity is: 12D3KooWDaeXbqokqvEMqpJsKBvjt9BUz41uP9tzRkYuky1Wat7Z +2023-03-22 14:55:53 💻 Operating system: macos +2023-03-22 14:55:53 💻 CPU architecture: aarch64 +2023-03-22 14:55:53 📦 Highest known block at #0 +2023-03-22 14:55:53 〽️ Prometheus exporter started at 127.0.0.1:9615 +2023-03-22 14:55:53 Running JSON-RPC HTTP server: addr=127.0.0.1:9933, allowed origins=["http://localhost:*", "http://127.0.0.1:*", "https://localhost:*", "https://127.0.0.1:*", "https://polkadot.js.org"] +2023-03-22 14:55:53 Running JSON-RPC WS server: addr=127.0.0.1:9944, allowed origins=["http://localhost:*", "http://127.0.0.1:*", "https://localhost:*", "https://127.0.0.1:*", "https://polkadot.js.org"] +2023-03-22 14:55:53 discovered: 12D3KooWMr4L3Dun4BUyp23HZtLfxoQjR56dDp9eH42Va5X6Hfgi /ip4/192.168.0.125/tcp/30304 +2023-03-22 14:55:53 discovered: 12D3KooWNHhcCUsZTdTkADmDJbSK9YjbtscHHA8R4jvrbGwjPVez /ip4/192.168.0.125/tcp/30305 +2023-03-22 14:55:53 discovered: 12D3KooWMr4L3Dun4BUyp23HZtLfxoQjR56dDp9eH42Va5X6Hfgi /ip4/192.168.88.12/tcp/30304 +2023-03-22 14:55:53 discovered: 12D3KooWNHhcCUsZTdTkADmDJbSK9YjbtscHHA8R4jvrbGwjPVez /ip4/192.168.88.12/tcp/30305 +``` + +### Network sync + +After a validator node is started, it will start syncing with the current chain state. Depending on the size of the chain when you do this, this step may take anywhere from a few minutes to a few hours. 
+
+Example of node sync:
+
+```sh filename="output after synced" copy
+2021-06-17 03:07:39 🔍 Discovered new external address for our node: /ip4/10.26.16.1/tcp/30333/ws/p2p/12D3KooWLtXFWf1oGrnxMGmPKPW54xWCHAXHbFh4Eap6KXmxoi9u
+2021-06-17 03:07:40 ⚙️ Syncing 218.8 bps, target=#5553764 (17 peers), best: #24034 (0x08af…dcf5), finalized #23552 (0xd4f0…2642), ⬇ 173.5kiB/s ⬆ 12.7kiB/s
+2021-06-17 03:07:45 ⚙️ Syncing 214.8 bps, target=#5553765 (20 peers), best: #25108 (0xb272…e800), finalized #25088 (0x94e6…8a9f), ⬇ 134.3kiB/s ⬆ 7.4kiB/s
+2021-06-17 03:07:50 ⚙️ Syncing 214.8 bps, target=#5553766 (21 peers), best: #26182 (0xe7a5…01a2), finalized #26112 (0xcc29…b1a9), ⬇ 5.0kiB/s ⬆ 1.1kiB/s
+2021-06-17 03:07:55 ⚙️ Syncing 138.4 bps, target=#5553767 (21 peers), best: #26874 (0xcf4b…6553), finalized #26624 (0x9dd9…27f8), ⬇ 18.9kiB/s ⬆ 2.0kiB/s
+2021-06-17 03:08:00 ⚙️ Syncing 37.0 bps, target=#5553768 (22 peers), best: #27059 (0x5b73…6fc9), finalized #26624 (0x9dd9…27f8), ⬇ 14.3kiB/s ⬆ 4.4kiB/s
+```
+
+### Bond TNT and set up validator account
+
+After your node is synced, you are ready to set up keys and onboard as a validator; make sure to complete the steps
+at [required keys](../required-keys.mdx) to start validating.
+
+---
+
+Congratulations! You have officially set up a Tangle Network node using systemd. If you are interested
+in learning how to set up monitoring for your node, please refer to the [monitoring](../monitoring/quickstart.mdx) page.
diff --git a/pages/docs/tangle-network/validator/troubleshooting.mdx b/pages/docs/tangle-network/validator/troubleshooting.mdx
new file mode 100644
index 00000000..5ebbeeac
--- /dev/null
+++ b/pages/docs/tangle-network/validator/troubleshooting.mdx
@@ -0,0 +1,108 @@
+---
+title: Troubleshooting
+description: Provides a series of suggested fixes for common issues when starting a Tangle node.
+---
+
+# Logs
+
+If you would like to run the node with verbose logs, you may add the following arguments during initial setup.
You may change the target to include `debug | error | info | trace | warn`.
+
+```
+-ldkg=debug \
+-ldkg_metadata=debug \
+-lruntime::offchain=debug \
+-ldkg_proposal_handler=debug \
+-ldkg_proposals=debug
+```
+
+# Troubleshooting
+
+## P2P Ports Not Open
+
+If you don't see an `Imported` message (without the `[Relaychain]` tag), you need to check the P2P port configuration. The P2P port must be open to incoming traffic.
+
+## In Sync
+
+Both chains must be in sync at all times, and you should see either `Imported` or `Idle` messages and have connected peers.
+
+## Genesis Mismatching
+
+If you notice log messages similar to the below:
+
+```
+DATE [Relaychain] Bootnode with peer id `ID` is on a different
+chain (our genesis: 0x3f5... theirs: 0x45j...)
+```
+
+This typically means that you are running an older version and will need to upgrade.
+
+## Troubleshooting for Apple Silicon users
+
+Install Homebrew if you have not already. You can check if you have it installed with the following command:
+
+```sh filename="brew" copy
+brew help
+```
+
+If you do not have it installed, open the Terminal application and execute the following commands:
+
+```sh filename="install brew" copy
+# Install Homebrew if necessary https://brew.sh/
+/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
+
+# Make sure Homebrew is up-to-date, install openssl
+brew update
+brew install openssl
+```
+
+❗ **Note:** Native ARM Homebrew installations are only going to be supported at `/opt/homebrew`. After Homebrew installs, make sure to add `/opt/homebrew/bin` to your PATH.
+
+```sh filename="add PATH" copy
+echo 'export PATH=/opt/homebrew/bin:$PATH' >> ~/.bash_profile
+```
+
+An example `bash_profile` for reference may look like the following:
+
+```sh filename="export PATH" copy
+export PATH=/opt/homebrew/bin:$PATH
+export PATH=/opt/homebrew/opt/llvm/bin:$PATH
+export CC=/opt/homebrew/opt/llvm/bin/clang
+export AR=/opt/homebrew/opt/llvm/bin/llvm-ar
+export LDFLAGS=-L/opt/homebrew/opt/llvm/lib
+export CPPFLAGS=-I/opt/homebrew/opt/llvm/include
+export RUSTFLAGS='-L /opt/homebrew/lib'
+```
+
+In order to build **dkg-substrate** in `--release` mode using the `aarch64-apple-darwin` Rust toolchain, you need to set the following environment variables:
+
+```sh filename="export" copy
+echo 'export RUSTFLAGS="-L /opt/homebrew/lib"' >> ~/.bash_profile
+```
+
+Ensure the `gmp` dependency is installed correctly.
+
+```sh filename="install gmp" copy
+brew install gmp
+```
+
+If you are still running into an issue with `gmp`, you may need to adjust your path to the `gmp` lib. Below is a suggested fix, but paths are machine / environment specific.
+
+Run:
+
+```sh filename="clean" copy
+cargo clean
+```
+
+Then:
+
+```sh filename="export" copy
+export LIBRARY_PATH=$LIBRARY_PATH:$(brew --prefix)/lib:$(brew --prefix)/opt/gmp/lib
+```
+
+This should be added to your bash_profile as well.
+
+Ensure the `protobuf` dependency is installed correctly.
+
+```sh filename="install protobuf" copy
+brew install protobuf
+```
diff --git a/pages/docs/tangle-network/validator/validation.mdx b/pages/docs/tangle-network/validator/validation.mdx
new file mode 100644
index 00000000..992f4571
--- /dev/null
+++ b/pages/docs/tangle-network/validator/validation.mdx
@@ -0,0 +1,21 @@
+# Validation
+
+In a blockchain context, validating usually refers to the process performed by nodes (often called validators) in the network to ensure that transactions and blocks meet the necessary rules and protocols of the network.
This can involve verifying that transactions are correctly signed, that they don't double-spend coins, and that newly created blocks are formatted correctly and include valid transactions. By validating data, transactions, or blocks, we ensure that the systems or networks in question operate as intended and maintain their integrity. In Proof-of-Stake systems, Validators are often incentivized and rewarded through portions of new tokens generated by inflation or otherwise. + +## Stepping into Responsibility + +Embarking on the journey to becoming a blockchain validator comes with considerable responsibility. As a validator, you are entrusted not only with your own stake but also the stake of those who nominate you. Any errors or breaches can lead to penalties known as slashing, impacting both your token balance and your standing within the network. However, being a validator can be immensely rewarding, offering you the opportunity to actively contribute to the security of a decentralized network and grow your digital assets. + +## Proceed with Caution + +We strongly advise that you possess substantial system administration experience before choosing to run your own validator. The role goes beyond merely running a blockchain binary; it requires the ability to address and resolve technical issues and anomalies independently. Running a validator is as much about knowledge as it is about problem-solving skills. + +## Security: Your Priority + +Security is paramount when running a successful validator. You should thoroughly familiarize yourself with the secure validator guidelines to understand the considerations when setting up your infrastructure. As you grow and evolve as a validator, these guidelines can serve as a foundation upon which you build your modifications and customizations. + +## Your Support Network + +Remember, you are not alone in this journey. We encourage you to connect with the [Webb community](https://webb.tools/community). 
These communities are teeming with experienced team members and fellow validators who are more than willing to answer questions, provide insights, and share valuable experiences. Additionally, you will want to make community members aware of your validator services, so they can nominate their stake to you.
+
+Embarking on the validator journey is both challenging and rewarding. With careful preparation, a strong understanding of the associated responsibilities and risks, and the support of the community, you can make significant contributions to the Webb ecosystem.
diff --git a/pages/docs/tangle-network/validator/validator-rewards.mdx b/pages/docs/tangle-network/validator/validator-rewards.mdx
new file mode 100644
index 00000000..3168e931
--- /dev/null
+++ b/pages/docs/tangle-network/validator/validator-rewards.mdx
@@ -0,0 +1,125 @@
+---
+title: Validator Rewards
+description: A brief overview of Tangle Network rewards and their payout scheme.
+---
+
+# Validator Rewards
+
+Running a [validator](validation.mdx) node on the Tangle Network allows you to connect to the network, sync with a bootnode, obtain local access to RPC endpoints, and also author blocks. The network rewards successful validators (users running validator nodes and actively producing blocks) by paying out a set amount of network tokens. Validators are chosen using the [AURA](https://docs.substrate.io/reference/glossary/#authority-round-aura) algorithm, which works to give every validator in the active set a chance at authoring a block.
+
+## How Rewards are Calculated
+
+### Era Points
+
+For every era (a period of time approximately 6 hours in length in Tangle), validators are paid proportionally to the amount of _era points_ they have collected. Era
+points are reward points earned for payable actions like:
+
+- producing a non-uncle block in the chain.
+- producing a reference to a previously unreferenced uncle block.
+- producing a referenced uncle block.
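The proportional split described above can be sketched in a few lines of Python. The point totals and the 8 TNT era payout below are purely illustrative, not actual Tangle values:

```python
def era_payout(points, total_payout):
    """Split one era's payout proportionally to the era points each validator earned."""
    total_points = sum(points.values())
    return {v: total_payout * p / total_points for v, p in points.items()}

# Illustrative era-point totals for three validators (hypothetical numbers)
points = {"v1": 60, "v2": 20, "v3": 20}

rewards = era_payout(points, total_payout=8)
print(rewards)  # v1 earns 4.8 TNT; v2 and v3 earn 1.6 TNT each
```

Note that each validator's reward depends only on its share of the era's total points, which is why the TNT value of a single era point is not known until the era ends.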
+
+An uncle block is a block that is valid in every regard, but which failed to become
+canonical. This can happen when two or more validators are block producers in a single slot, and the
+block produced by one validator reaches the next block producer before the others. We call the
+lagging blocks uncle blocks.
+
+Payments occur at the end of every era.
+
+Era points create a probabilistic component for staking rewards.
+
+If the _mean_ of staking rewards is the average rewards per era, then the _variance_ is the
+variability from the average staking rewards. The exact TNT value of each era point is not known in
+advance since it depends on the total number of points earned by all validators in a given era. It
+is designed this way so that the total payout per era depends on Tangle's inflation model, and not on the number of payable
+actions (e.g., authoring a new block) executed.
+
+In this case, analyzing the _expected value_ of staking rewards will paint a better picture, as the
+weights of validators' and para-validators' era points in the reward average are taken into
+consideration.
+
+#### High-level breakdown of reward variance
+
+This should only serve as a high-level overview of the probabilistic nature of staking rewards.
+
+Let:
+
+- `pe` = para-validator era points,
+- `ne` = non-para-validator era points,
+- `EV` = expected value of staking rewards,
+
+Then, `EV(pe)` has more influence on the `EV` than `EV(ne)`.
+
+Since `EV(pe)` has a more weighted probability on the `EV`, the increase in variance against the
+`EV` becomes apparent between the different validator pools (aka. validators in the active set and
+the ones chosen to para-validate).
+
+Also, let:
+
+- `v` = the variance of staking rewards,
+- `p` = number of para-validators,
+- `w` = number of validators in the active set,
+- `e` = era,
+
+Then, `v` ↑ if `w` ↑, as this reduces `p` : `w`, with respect to `e`.
+ +Increased `v` is expected, and initially keeping `p` ↓ using the same para-validator set for +all parachains ensures availability and approval voting. In addition, despite `v` ↑ on an `e` to `e` +basis, over time, the amount of rewards each validator receives will equal out based on the +continuous selection of para-validators. + +## Payout Scheme + +No matter how much total stake is behind a validator, all validators split the block authoring +payout essentially equally. The payout of a specific validator, however, may differ based on +era points, as described above. Although there is a probabilistic component to +receiving era points, and they may be impacted slightly depending on factors such as network +connectivity, well-behaving validators should generally average out to having similar era point +totals over a large number of eras. + +Validators may also receive "tips" from senders as an incentive to include transactions in their +produced blocks. Validators will receive 100% of these tips directly. + +For simplicity, the examples below will assume all validators have the same amount of era points, +and received no tips. + +``` +Validator Set Size (v): 4 +Validator 1 Stake (v1): 18 tokens +Validator 2 Stake (v2): 9 tokens +Validator 3 Stake (v3): 8 tokens +Validator 4 Stake (v4): 7 tokens +Payout (p): 8 TNT + +Payout for each validator (v1 - v4): +p / v = 8 / 4 = 2 tokens +``` + +Note that this is different than most other Proof-of-Stake systems such as Cosmos. As long as a +validator is in the validator set, it will receive the same block reward as every other validator. +Validator `v1`, who had 18 tokens staked, received the same reward (2 tokens) in this era as `v4` +who had only 7 tokens staked. + +## Slashing + +Although rewards are paid equally, slashes are relative to a validator's stake. Therefore, if you do +have enough TNT to run multiple validators, it is in your best interest to do so. 
A slash of 30%
+will, of course, be more TNT for a validator with 18 TNT staked than one with 9 TNT staked.
+
+Running multiple validators does not absolve you of the consequences of misbehavior. Polkadot
+punishes attacks that appear coordinated more severely than individual attacks. You should not, for
+example, run multiple validators hosted on the same infrastructure. A proper multi-validator
+configuration would ensure that they do not fail simultaneously.
+
+Nominators have the incentive to nominate the lowest-staked validator, as this will result in the
+lowest risk and highest reward. This is due to the fact that while their vulnerability to slashing
+remains the same (since it is percentage-based), their rewards are higher since their stake will be a
+higher proportion of the total stake allocated to that validator.
+
+To clarify this, let us imagine two validators, `v1` and `v2`. Assume both are in the active set,
+have commission set to 0%, and are well-behaved. The only difference is that `v1` has 90 TNT
+nominating it and `v2` only has 10. If you nominate `v1`, it now has `90 + 10 = 100` TNT, and you
+will get 10% of the staking rewards for the next era. If you nominate `v2`, it now has
+`10 + 10 = 20` TNT nominating it, and you will get 50% of the staking rewards for the next era. In
+actuality, it would be quite rare to see such a large difference between the stake of validators,
+but the same principle holds even for smaller differences. If there is a 10% slash of either
+validator, then you will lose 1 TNT in each case.
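The nominator arithmetic above can be checked with a short sketch (illustrative numbers only; commission and era-point variation are ignored for simplicity):

```python
def nominator_outcome(existing_stake, my_stake, slash_fraction):
    """Return (reward share, TNT lost in a slash) for a nominator joining a validator."""
    total = existing_stake + my_stake
    share = my_stake / total               # your proportion of the validator's total stake
    loss = share * slash_fraction * total  # slashes are percentage-based on the total stake
    return share, loss

# v1 already has 90 TNT nominating it; v2 only has 10 TNT. We add 10 TNT to each.
share_v1, loss_v1 = nominator_outcome(90, 10, slash_fraction=0.10)
share_v2, loss_v2 = nominator_outcome(10, 10, slash_fraction=0.10)

print(share_v1, share_v2)  # 10% of rewards via v1 vs 50% via v2
print(loss_v1, loss_v2)    # a 10% slash costs ~1 TNT either way
```

This makes the asymmetry concrete: the reward share differs fivefold while the absolute slashing exposure is identical, which is exactly why nominating the lower-staked validator is attractive.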