diff --git a/docs/source/deployment_guide_overview.rst b/docs/source/deployment_guide_overview.rst
index 9d26a738078..22d10e703b8 100644
--- a/docs/source/deployment_guide_overview.rst
+++ b/docs/source/deployment_guide_overview.rst
@@ -62,7 +62,7 @@ Generally speaking, Fabric is agnostic to the methods used to deploy and manage
 As long as you have the ability to deploy containers, whether locally (or behind a firewall), or in a cloud, it should be possible to stand up components and connect them to each other. However, Kubernetes features a number of helpful tools that have made it a popular container management platform for deploying and managing Fabric networks. For more information about Kubernetes, check out `the Kubernetes documentation `_. This topic will mostly limit its scope to the binaries and provide instructions that can be applied when using a Docker deployment or Kubernetes.

-However and wherever you choose to deploy your components, you will need to make sure you have enough resources for the components to run effectively. The sizes you need will largely depend on your use case. If you plan to join a single peer to several high volume channels, it will need much more CPU and memory than if you only plan to join to a single channel. As a rough estimate, plan to dedicate approximately three times the resources to a peer as you plan to allocate to a single ordering node (as you will see below, it is recommended to deploy at least three and optimally five nodes in an ordering service). Similarly, you should need approximately a tenth of the resources for a CA as you will for a peer. You will also need to add storage to your cluster (some cloud providers may provide storage) as you cannot configure Persistent Volumes and Persistent Volume Claims without storage being set up with your cloud provider first.
+However, wherever you choose to deploy your components, you will need to make sure you have enough resources for the components to run effectively. The sizes you need will largely depend on your use case. If you plan to join a single peer to several high volume channels, it will need much more CPU and memory than if you only plan to join a single channel. As a rough estimate, plan to dedicate approximately three times the resources to a peer as you plan to allocate to a single ordering node (as you will see below, it is recommended to deploy at least three and optimally five nodes in an ordering service). Similarly, a CA should need approximately a tenth of the resources that you allocate for a peer. You will also need to add storage to your cluster (some cloud providers may provide storage) as you cannot configure Persistent Volumes and Persistent Volume Claims without storage being set up with your cloud provider first. The use of persistent storage ensures that data such as MSPs, ledgers, and installed chaincodes are not stored on the container filesystem, preventing them from being destroyed if the containers are destroyed.

 By deploying a proof of concept network and testing it under load, you will have a better sense of the resources you will require.

@@ -160,7 +160,7 @@ When you're comfortable with how your peer has been configured, how your volumes
 .. toctree::
    :maxdepth: 1
-   :caption: Deploying a Production Peer
+   :caption: Deploying a production peer

    deploypeer/peerplan
    deploypeer/peerchecklist

@@ -171,31 +171,35 @@ When you're comfortable with how your peer has been configured, how your volumes
 Creating an ordering node
 ~~~~~~~~~~~~~~~~~~~~~~~~~

-Unlike the creation of a peer, you will need to create a genesis block (or reference a block that has already been created, if adding an ordering node to an existing ordering service) and specify the path to it before launching the ordering node.
+Note: while it is possible to add additional nodes to an ordering service, only the process for creating an ordering service is covered in these tutorials.
-In Fabric, this configuration file for ordering nodes is called ``orderer.yaml``. You can find a sample ``orderer.yaml`` configuration file `in the sampleconfig directory of Hyperledger Fabric `_. Note that ``orderer.yaml`` is different than the "genesis block" of an ordering service. This block, which includes the initial configuration of the orderer system channel, must be created before an ordering node is created because it is used to bootstrap the node.
+If you’ve read through the key concept topic on :doc:`orderer/ordering_service`, you should have a good idea of the role the ordering service plays in a network and the nature of its interactions with other network components. The ordering service is responsible for literally “ordering” endorsed transactions into blocks, which peers then validate and commit to their ledgers.
-As with the peer, you will see that there are quite a number of parameters you either have the option to set or will need to set for your node to work properly. In general, if you do not have the need to change a tuning value, leave it alone.
+These roles are important to understand before you create an ordering service, as they will influence your customization and deployment decisions. One of the chief differences between a peer and an ordering service is that in a production network, multiple ordering nodes work together to form the “ordering service” of a channel. This creates a series of important decisions that need to be made at both the node level and at the cluster level. Some of these cluster decisions are not made in individual ordering node ``orderer.yaml`` files but instead in the ``configtx.yaml`` file that is used to generate the genesis block for the system channel (which is used to bootstrap ordering nodes), and also used to generate the genesis block of application channels. For a look at the various decisions you will need to make, check out :doc:`deployorderer/ordererplan`.
-Either way, here are some values in ``orderer.yaml`` you must review. You will notice that some of these fields are the same as those in ``core.yaml`` only with different names.
+The configuration values in an ordering node’s ``orderer.yaml`` file must be customized, either by editing the file or by overriding them with environment variables. You can find the default ``orderer.yaml`` configuration file `in the sampleconfig directory of Hyperledger Fabric `_.
-* ``General.LocalMSPID``: this is the name of the local MSP, generated by your CA, of your orderer organization.
+This configuration file is bundled with the orderer image and is also included with the downloadable binaries. For information about how to download the production ``orderer.yaml`` along with the orderer image, check out :doc:`deployorderer/ordererdeploy`.
-* ``General.LocalMSPDir``: the place where the local MSP for the ordering node is located. Note that it is a best practice to mount this volume external to your container.
+While there are many parameters in the default ``orderer.yaml``, you will only need to customize a small percentage of them. In general, if you do not have the need to change a tuning value, keep the default value.
-* ``General.ListenAddress`` and ``General.ListenPort``: represents the endpoint to other ordering nodes in the same organization.
+Among the parameters in ``orderer.yaml``, there are:
-* ``FileLedger``: although ordering nodes do not have a state database, they still all carry copies of the blockchain, as this allows them to verify permissions using the latest config block. Therefore the ledger fields should be customized with the correct file path.
+* **Identifiers**: these include not just the paths to the relevant local MSP and Transport Layer Security (TLS) certificates, but also the MSP ID of the organization that owns the ordering node.
-* ``Cluster``: these values are important for ordering service nodes that communicate with other ordering nodes, such as in a Raft based ordering service.
+* **Addresses and paths**: because ordering nodes interact with other components, you must specify a series of addresses in the configuration. These include addresses where the ordering node itself can be found by other components as well as **Operations and metrics**, which allow you to set up methods for monitoring the health and performance of your ordering node through the configuration of endpoints.
-* ``General.BootstrapFile``: this is the name of the configuration block used to bootstrap an ordering node. If this node is the first node generated in an ordering service, this file will have to be generated and is known as the "genesis block".
+For more information about ``orderer.yaml`` and its specific parameters, check out :doc:`deployorderer/ordererchecklist`.
-* ``General.BootstrapMethod``: the method by which the bootstrap block is given. For now, this can only be ``file``, in which the file in the ``BootstrapFile`` is specified. Starting in 2.0, you can specify ``none`` to simply start the orderer without bootstrapping.
+When you're comfortable with how your ordering node has been configured, how your volumes are mounted, and how your backend is configured, you can run the command to launch the ordering node (this command will depend on your backend configuration).
-* ``Consensus``: determines the key/value pairs allowed by the consensus plugin (Raft ordering services are supported and recommended) for the Write Ahead Logs (``WALDir``) and Snapshots (``SnapDir``).
+.. toctree::
+   :maxdepth: 1
+   :caption: Deploying a production ordering node
-When you're comfortable with how your ordering node has been configured, how your volumes are mounted, and your backend configuration, you can run the command to launch the ordering node (this command will depend on your backend configuration).
+   deployorderer/ordererplan
+   deployorderer/ordererchecklist
+   deployorderer/ordererdeploy

 Next steps
 ----------
diff --git a/docs/source/deployorderer/ordererchecklist.md b/docs/source/deployorderer/ordererchecklist.md
new file mode 100644
index 00000000000..dbeb04bb1ed
--- /dev/null
+++ b/docs/source/deployorderer/ordererchecklist.md
@@ -0,0 +1,324 @@
+# Checklist for a production ordering node
+
+As you prepare to build a production ordering service (or a single ordering node), you need to customize the configuration by editing the [orderer.yaml](https://github.com/hyperledger/fabric/blob/{BRANCH}/sampleconfig/orderer.yaml) file, which is copied into the `/config` directory when downloading the Fabric binaries, and available within the Fabric ordering node image at `/etc/hyperledger/fabric/orderer.yaml`.
+
+While in a production environment you could override the settings in the `orderer.yaml` file by using environment variables in your Docker container or your Kubernetes job, these instructions show how to edit `orderer.yaml` instead. It’s important to understand the parameters in the configuration file and their dependencies on other parameter settings in the file. Blindly overriding one setting using an environment variable could affect the functionality of another setting. Therefore, the recommendation is that before starting the ordering node, you make the modifications to the settings in the configuration file to become familiar with the available settings and how they work. Afterwards, you may choose to override these parameters using environment variables.
+
+This checklist covers key configuration parameters for setting up a production ordering service and provides guidance on which parameters should be overridden. Of course, you can always refer to the orderer.yaml file for additional parameters or more information. The list of parameters that you need to understand and that are described in this topic include:
+
+* [General.ListenAddress](#general-listenaddress)
+* [General.ListenPort](#general-listenport)
+* [General.TLS.*](#general-tls)
+* [General.Keepalive.*](#general-keepalive)
+* [General.Cluster.*](#general-cluster)
+* [General.BootstrapMethod](#general-bootstrapmethod)
+* [General.BootstrapFile](#general-bootstrapfile)
+* [General.LocalMSPDir](#general-localmspdir)
+* [General.LocalMSPID](#general-localmspid)
+* [FileLedger.Location](#fileledger-location)
+* [Operations.*](#operations)
+* [Metrics.*](#metrics)
+* [Consensus.*](#consensus)
+
+## General.ListenAddress
+
+```
+# Listen address: The IP on which to bind to listen.
+ListenAddress: 127.0.0.1
+```
+
+* **`ListenAddress`**: (default value should be overridden) This is the location where the orderer will listen, for example, `0.0.0.0`. Note: unlike the peer, the `orderer.yaml` separates the address and the port, hence the [General.ListenPort](#general-listenport) parameter.
+
+## General.ListenPort
+
+```
+# Listen port: The port on which to bind to listen.
+ListenPort: 7050
+```
+
+* **`ListenPort`**: (default value should be overridden) This is the port that the orderer listens on.
+
+## General.TLS
+
+```
+Enabled: false
+# PrivateKey governs the file location of the private key of the TLS certificate.
+PrivateKey: tls/server.key
+# Certificate governs the file location of the server TLS certificate.
+Certificate: tls/server.crt
+RootCAs:
+  - tls/ca.crt
+ClientAuthRequired: false
+ClientRootCAs:
+```
+
+* **`Enabled`**: (default value should be overridden) In a production network, you should be using TLS-secured communications. This value should be `true`.
+* **`PrivateKey`**: (default value should be overridden) Provide the path to, and filename of, the private key generated by your TLS CA for this node.
+* **`Certificate`**: (default value should be overridden) Provide the path to, and filename of, the public certificate (also known as the sign certificate) generated by your TLS CA for this node.
+* **`RootCAs`**: (should be commented out) This parameter is typically unset for normal use. It is a list of the paths to additional root certificates used for verifying certificates of other orderer nodes during outbound connections. It can be used to augment the set of TLS CA certificates available from the MSPs of each channel's configuration.
+* **`ClientAuthRequired`**: (Mutual TLS only) Setting this value to “true” will enable mutual TLS on your network, and must be done for the entire network, not just one node.
+* **`ClientRootCAs`**: (Mutual TLS only) Can be left blank if mutual TLS is disabled. If mutual TLS is enabled, this is a list of the paths to additional root certificates used for verifying certificates of client connections. It can be used to augment the set of TLS CA certificates available from the MSPs of each channel’s configuration.
+
+## General.Keepalive
+
+The `Keepalive` values might need to be tuned for compatibility with any networking devices or software (like firewalls or proxies) in between components of your network. Ideally, these settings would be manipulated if needed in a test or pre-prod environment and then set accordingly for your production environment.
+
+```
+# ServerMinInterval is the minimum permitted time between client pings.
+# If clients send pings more frequently, the server will
+# disconnect them.
+ServerMinInterval: 60s
+# ServerInterval is the time between pings to clients.
+ServerInterval: 7200s
+# ServerTimeout is the duration the server waits for a response from
+# a client before closing the connection.
+ServerTimeout: 20s
+```
+
+* **`ServerMinInterval`**: (default value should not be overridden, unless determined necessary through testing)
+* **`ServerInterval`**: (default value should not be overridden, unless determined necessary through testing)
+* **`ServerTimeout`**: (default value should not be overridden, unless determined necessary through testing)
+
+## General.Cluster
+
+```
+# SendBufferSize is the maximum number of messages in the egress buffer.
+# Consensus messages are dropped if the buffer is full, and transaction
+# messages are waiting for space to be freed.
+SendBufferSize: 10
+# ClientCertificate governs the file location of the client TLS certificate
+# If not set, the server General.TLS.Certificate is re-used.
+ClientCertificate:
+# If not set, the server General.TLS.PrivateKey is re-used.
+ClientPrivateKey:
+# The below 4 properties should be either set together, or be unset together.
+# If they are set, then the orderer node uses a separate listener for intra-cluster
+# communication. If they are unset, then the general orderer listener is used.
+# This is useful if you want to use a different TLS server certificates on the
+# client-facing and the intra-cluster listeners.
+
+# ListenPort defines the port on which the cluster listens to connections.
+ListenPort:
+# ListenAddress defines the IP on which to listen to intra-cluster communication.
+ListenAddress:
+# ServerCertificate defines the file location of the server TLS certificate used for intra-cluster
+# communication.
+ServerCertificate:
+# ServerPrivateKey defines the file location of the private key of the TLS certificate.
+ServerPrivateKey:
+```
+
+If unset, the `ClientCertificate` and `ClientPrivateKey` default to the server `General.TLS.Certificate` and `General.TLS.PrivateKey` when the orderer is not configured to use a separate cluster port.
+
+* **`ClientCertificate`**: Provide the path to, and filename of, the public certificate (also known as a signed certificate) generated by your TLS CA for this node.
+* **`ClientPrivateKey`**: Provide the path to, and filename of, the private key generated by your TLS CA for this node.
+
+In general, these four parameters would only need to be configured if you want to configure a separate listener and TLS certificates for intra-cluster communication (with other Raft orderers), as opposed to using the listener that peer clients and application clients utilize. This is an advanced deployment option. These four parameters should be set together or left unset, and if they are set, note that the `ClientCertificate` and `ClientPrivateKey` must be set as well.
+
+* **`ListenPort`**
+* **`ListenAddress`**
+* **`ServerCertificate`**
+* **`ServerPrivateKey`**
+
+## General.BootstrapMethod
+
+```
+# Bootstrap method: The method by which to obtain the bootstrap block
+# system channel is specified. The option can be one of:
+# "file" - path to a file containing the genesis block or config block of system channel
+# "none" - allows an orderer to start without a system channel configuration
+BootstrapMethod: file
+```
+
+* **`BootstrapMethod`**: (default value should not be overridden) Unless you plan to use a bootstrap method other than “file”, this value should be left as is.
+
+## General.BootstrapFile
+
+```
+# Bootstrap file: The file containing the bootstrap block to use when
+# initializing the orderer system channel and BootstrapMethod is set to
+# "file". The bootstrap file can be the genesis block, and it can also be
+# a config block for late bootstrap of some consensus methods like Raft.
+# Generate a genesis block by updating $FABRIC_CFG_PATH/configtx.yaml and
+# using configtxgen command with "-outputBlock" option.
+# Defaults to file "genesisblock" (in $FABRIC_CFG_PATH directory) if not specified.
+BootstrapFile:
+```
+
+* **`BootstrapFile`**: (default value should be overridden) Specify the location and name of the system channel genesis block to use when this node is created.
+
+## General.LocalMSPDir
+
+```
+# LocalMSPDir is where to find the private crypto material needed by the
+# orderer. It is set relative here as a default for dev environments but
+# should be changed to the real location in production.
+LocalMSPDir: msp
+```
+
+* **`LocalMSPDir`**: (default value will often be overridden) This is the path to the ordering node's local MSP, which must be created before it can be deployed. The path can be absolute or relative to `FABRIC_CFG_PATH` (by default, it is `/etc/hyperledger/fabric` in the orderer image). Unless an absolute path is specified to a folder named something other than "msp", the ordering node defaults to looking for a folder called “msp” at the path (in other words, `FABRIC_CFG_PATH/msp`) and when using the orderer image: `/etc/hyperledger/fabric/msp`.
+If you are using the recommended folder structure described in the [Registering and enrolling identities with a CA](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/use_CA.html) topic, it would be relative to the `FABRIC_CFG_PATH` as follows:
+`config/organizations/ordererOrganizations/org0.example.com/orderers/orderer0.org0.example.com/msp`. **The best practice is to store this data in persistent storage**. This prevents the MSP from being lost if your orderer containers are destroyed for some reason.
+
+## General.LocalMSPID
+
+```
+# LocalMSPID is the identity to register the local MSP material with the MSP
+# manager. IMPORTANT: The local MSP ID of an orderer needs to match the MSP
+# ID of one of the organizations defined in the orderer system channel's
+# /Channel/Orderer configuration. The sample organization defined in the
+# sample configuration provided has an MSP ID of "SampleOrg".
+LocalMSPID: SampleOrg
+```
+
+* **`LocalMSPID`**: (default value should be overridden) The MSP ID must match the orderer organization MSP ID that exists in the configuration of the system channel. This means the MSP ID must have been listed in the `configtx.yaml` used to create the genesis block of the system channel (or have been added later to the list of system channel administrators).
+
+## General.BCCSP.*
+
+```
+# Default specifies the preferred blockchain crypto service provider
+  # to use. If the preferred provider is not available, the software
+  # based provider ("SW") will be used.
+  # Valid providers are:
+  #  - SW: a software based crypto provider
+  #  - PKCS11: a CA hardware security module crypto provider.
+  Default: SW
+
+  # SW configures the software based blockchain crypto provider.
+  SW:
+    # TODO: The default Hash and Security level needs refactoring to be
+    # fully configurable. Changing these defaults requires coordination
+    # SHA2 is hardcoded in several places, not only BCCSP
+    Hash: SHA2
+    Security: 256
+    # Location of key store. If this is unset, a location will be
+    # chosen using: 'LocalMSPDir'/keystore
+    FileKeyStore:
+      KeyStore:
+```
+
+(Optional) This section is used to configure the blockchain crypto service provider (BCCSP).
+
+* **`BCCSP.Default:`** If you plan to use a Hardware Security Module (HSM), then this must be set to `PKCS11`.
+
+```
+# Settings for the PKCS#11 crypto provider (i.e. when DEFAULT: PKCS11)
+  PKCS11:
+    # Location of the PKCS11 module library
+    Library:
+    # Token Label
+    Label:
+    # User PIN
+    Pin:
+    Hash:
+    Security:
+    FileKeyStore:
+      KeyStore:
+```
+
+* **`BCCSP.PKCS11.*:`** Provide this set of parameters according to your HSM configuration. Refer to this [example](../hsm.html) of an HSM configuration for more information.
+
+## FileLedger.Location
+
+```
+# Location: The directory to store the blocks in.
+Location: /var/hyperledger/production/orderer
+```
+
+* **`Location`**: (default value should only be overridden in the unlikely event that two ordering nodes are running on the same host) Every channel on which the node is a consenter will have its own subdirectory at this location. The user running the orderer needs to own and have write access to this directory. **The best practice is to store this data in persistent storage**. This prevents the ledger from being lost if your orderer containers are destroyed for some reason.
+
+## Operations.*
+
+The operations service is used for monitoring the health of the ordering node and relies on mutual TLS to secure its communication.
+Therefore, you need to set `operations.tls.clientAuthRequired` to `true`. When this parameter is set to `true`, clients attempting to ascertain the health of the node are required to provide a valid certificate for authentication. If the client does not provide a certificate or the service cannot verify the client’s certificate, the request is rejected. This means that the clients will need to register with the ordering node's TLS CA and provide their TLS signing certificate on the requests. See [The Operations Service](../operations_service.html) to learn more.
+
+If you plan to use Prometheus [metrics](#metrics) to monitor your ordering node, you must configure the operations service here.
+
+In the unlikely case where two ordering nodes are running on the same host in your infrastructure, you need to modify the addresses for the second ordering node to use a different port. Otherwise, when you start the second ordering node, it will fail to start, reporting that the addresses are already in use.
+
+```
+# host and port for the operations server
+  ListenAddress: 127.0.0.1:8443
+
+  # TLS configuration for the operations endpoint
+  TLS:
+    # TLS enabled
+    Enabled: false
+
+    # Certificate is the location of the PEM encoded TLS certificate
+    Certificate:
+
+    # PrivateKey points to the location of the PEM-encoded key
+    PrivateKey:
+
+    # Most operations service endpoints require client authentication when TLS
+    # is enabled. ClientAuthRequired requires client certificate authentication
+    # at the TLS layer to access all resources.
+    ClientAuthRequired: false
+
+    # Paths to PEM encoded ca certificates to trust for client authentication
+    ClientRootCAs: []
+```
+
+* **`ListenAddress`**: (required when using the operations service) Specify the address and port of the operations server.
+* **`Enabled`**: (required when using the operations service) Must be `true` if the operations service is being used.
+* **`Certificate`**: (required when using the operations service) Can be the same file as the `General.TLS.Certificate`.
+* **`PrivateKey`**: (required when using the operations service) Can be the same file as the `General.TLS.PrivateKey`.
+* **`ClientAuthRequired`**: (required when using the operations service) Must be set to `true` to enable mutual TLS between the client and the server.
+* **`ClientRootCAs`**: (required when using the operations service) Similar to the client root CA cert file in TLS, it contains a list of client root CA certificates that can be used to verify client certificates. If the client enrolled with the orderer organization CA, then this value is the orderer organization root CA cert.
+
+## Metrics.*
+
+By default this is disabled, but if you want to monitor the metrics for the orderer, you need to use `StatsD` or `Prometheus` as your metric provider. `StatsD` uses a "push" model, pushing metrics from the ordering node to a `StatsD` endpoint. Because of this, it does not require configuration of the operations service itself. `Prometheus` metrics, by contrast, are pulled from an ordering node.
+
+For more information about the available `Prometheus` metrics, check out [Prometheus](../metrics_reference.html#prometheus).
+
+For more information about the available `StatsD` metrics, check out [StatsD](../metrics_reference.html#statsd).
+
+Because Prometheus utilizes a "pull" model, no configuration is required beyond making the operations service available. Rather, Prometheus will send requests to the operations URL to poll for available metrics.
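+For illustration, here is a minimal sketch of a Prometheus scrape job for one ordering node. The hostname `orderer0.example.com`, port `8443`, and certificate paths are assumptions to replace with your own values, and the TLS files must match the mutual TLS settings of the operations service:
+
+```
+scrape_configs:
+  - job_name: 'orderer0'
+    scheme: https
+    static_configs:
+      - targets: ['orderer0.example.com:8443']
+    tls_config:
+      # root certificate of the TLS CA that issued the operations certificates
+      ca_file: /certs/tls-ca-cert.pem
+      # client certificate and key, required because the operations service uses mutual TLS
+      cert_file: /certs/client-tls-cert.pem
+      key_file: /certs/client-tls-key.pem
+```
+
+The `Metrics` section of `orderer.yaml`, where the provider is selected, is shown below.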
+
+```
+  # The metrics provider is one of statsd, prometheus, or disabled
+  Provider: disabled
+
+  # The statsd configuration
+  Statsd:
+    # network type: tcp or udp
+    Network: udp
+
+    # the statsd server address
+    Address: 127.0.0.1:8125
+
+    # The interval at which locally cached counters and gauges are pushed
+    # to statsd; timings are pushed immediately
+    WriteInterval: 30s
+
+    # The prefix is prepended to all emitted statsd metrics
+    Prefix:
+```
+
+* **`Provider`**: Set this value to `statsd` if using `StatsD` or `prometheus` if using `Prometheus`.
+* **`Statsd.Address`**: (required to use `StatsD` metrics for the ordering node) When `StatsD` is enabled, you will need to configure the `hostname` and `port` of the `StatsD` server so that the ordering node can push metric updates.
+
+## Consensus.*
+
+The values of this section vary by consensus plugin. The values below are for the `etcdraft` consensus plugin. If you are using a different consensus plugin, refer to its documentation for allowed keys and recommended values.
+
+```
+# The allowed key-value pairs here depend on consensus plugin. For etcd/raft,
+# we use following options:
+
+# WALDir specifies the location at which Write Ahead Logs for etcd/raft are
+# stored. Each channel will have its own subdir named after channel ID.
+WALDir: /var/hyperledger/production/orderer/etcdraft/wal
+
+# SnapDir specifies the location at which snapshots for etcd/raft are
+# stored. Each channel will have its own subdir named after channel ID.
+SnapDir: /var/hyperledger/production/orderer/etcdraft/snapshot
+```
+
+* **`WALDir`**: (default value should be overridden) This is the path to the write ahead logs on the local filesystem of the ordering node. It can be an absolute path or relative to `FABRIC_CFG_PATH`. It defaults to `/var/hyperledger/production/orderer/etcdraft/wal`. Each channel will have its own subdirectory named after the channel ID. The user running the ordering node needs to own and have write access to this directory. **The best practice is to store this data in persistent storage**. This prevents the write ahead log from being lost if your orderer containers are destroyed for some reason.
+* **`SnapDir`**: (default value should be overridden) This is the path to the snapshots on the local filesystem of the ordering node. For more information about how snapshots work in a Raft ordering service, check out [Snapshots](../orderer/ordering_service.html#snapshots). It can be an absolute path or relative to `FABRIC_CFG_PATH`. It defaults to `/var/hyperledger/production/orderer/etcdraft/snapshot`. Each channel will have its own subdirectory named after the channel ID. The user running the ordering node needs to own and have write access to this directory. **The best practice is to store this data in persistent storage**. This prevents snapshots from being lost if your orderer containers are destroyed for some reason.
+
+For more information about ordering node configuration, including how to set parameters that are not available in `orderer.yaml`, check out [Configuring and operating a Raft ordering service](../raft_configuration.html).
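+As noted at the beginning of this topic, you may also override any of these parameters with environment variables. By convention, the variable name is the parameter's path, uppercased, joined with underscores, and prefixed with `ORDERER_`. A brief sketch with assumed example values:
+
+```
+# Overrides General.ListenAddress and General.ListenPort
+export ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
+export ORDERER_GENERAL_LISTENPORT=7050
+# Overrides General.LocalMSPID and General.LocalMSPDir
+export ORDERER_GENERAL_LOCALMSPID=OrdererMSP
+export ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
+```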
+
+
diff --git a/docs/source/deployorderer/ordererdeploy.md b/docs/source/deployorderer/ordererdeploy.md
new file mode 100644
index 00000000000..9cde632c34a
--- /dev/null
+++ b/docs/source/deployorderer/ordererdeploy.md
@@ -0,0 +1,296 @@
+# Deploy the ordering service
+
+Before deploying an ordering service, review the material in [Planning for an ordering service](./ordererplan.html) and [Checklist for a production ordering node](./ordererchecklist.html), which discusses all of the relevant decisions you need to make and parameters you need to configure before deploying an ordering service.
+
+This tutorial is based on the Raft consensus protocol and can be used to build an ordering service, which is made up of ordering nodes, or "orderers". It describes the process to create a three-node Raft ordering service where all of the ordering nodes belong to the same organization.
+
+## Download the ordering service binary and configuration files
+
+To get started, download the Fabric binaries from [GitHub](https://github.com/hyperledger/fabric/releases) to a folder on your local system, for example `fabric/`. In GitHub, scroll to the Fabric release you want to download, click the **Assets** twistie, and select the binary for your system type. After you extract the `.zip` file, you will find all of the Fabric binaries in the `/bin` folder and the associated configuration files in the `/config` folder.
+The resulting folder structure is similar to:
+
+```
+├── fabric
+  ├── bin
+  │   ├── configtxgen
+  │   ├── configtxlator
+  │   ├── cryptogen
+  │   ├── discover
+  │   ├── idemixgen
+  │   ├── orderer
+  │   └── osnadmin
+  └── config
+    ├── configtx.yaml
+    ├── orderer.yaml
+    └── core.yaml
+```
+
+Along with the relevant binary file, the orderer configuration file, `orderer.yaml`, is required to launch an orderer on the network. The other files are not required for the orderer deployment but are useful when you attempt to create or edit channels, among other tasks.
+
+**Tip:** Add the location of the orderer binary to your `PATH` environment variable so that it can be picked up without fully qualifying the path to the binary executable, for example:
+
+```
+export PATH=<path to download location>/bin:$PATH
+```
+
+After you have mastered deploying and running an ordering service by using the orderer binary and `orderer.yaml` configuration file, it is likely that you will want to use an orderer container in a Kubernetes or Docker deployment. The Hyperledger Fabric project publishes an [orderer image](https://hub.docker.com/r/hyperledger/fabric-orderer) that can be used for development and test, and various vendors provide supported orderer images. For now though, the purpose of this topic is to teach you how to properly use the orderer binary so you can take that knowledge and apply it to the production environment of your choice.
+
+## Prerequisites
+
+Before you can launch an orderer in a production network, you need to make sure you've created and organized the necessary certificates, generated the genesis block, decided on storage, and configured `orderer.yaml`.
+
+### Certificates
+
+While **cryptogen** is a convenient utility that can be used to generate certificates for a test environment, it should **never** be used on a production network. The core requirement for certificates for Fabric nodes is that they are Elliptic Curve (EC) certificates. You can use any tool you prefer to issue these certificates (for example, OpenSSL).
+However, the Fabric CA streamlines the process because it generates the Membership Service Providers (MSPs) for you.
+
+Before you can deploy the orderer, create the recommended folder structure for the orderer certificates that is described in the [Registering and enrolling identities with a CA](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/use_CA.html) topic to store the generated certificates and MSPs.
+
+This folder structure isn't mandatory, but these instructions presume you have created it:
+
+```
+├── organizations
+  └── ordererOrganizations
+    └── ordererOrg1.example.com
+      ├── msp
+        ├── cacerts
+        └── tlscacerts
+      ├── orderers
+        └── orderer0.ordererOrg1.example.com
+          ├── msp
+          └── tls
+```
+
+You should have already used your certificate authority of choice to generate the orderer enrollment certificate, TLS certificate, private keys, and the MSPs that Fabric must consume. Refer to the [CA deployment guide](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/cadeploy.html) and [Registering and enrolling identities with a CA](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/use_CA.html) topics for instructions on how to create a Fabric CA and how to generate these certificates. You need to generate the following sets of certificates:
+  - **Orderer organization MSP**
+  - **Orderer TLS CA certificates**
+  - **Orderer local MSP (enrollment certificate and private key of the orderer)**
+
+You will either need to use the Fabric CA client to generate the certificates directly into the recommended folder structure or you will need to copy the generated certificates to their recommended folders after they are generated. Whichever method you choose, most users are ultimately likely to script this process so it can be repeated as needed. A list of the certificates and their locations is provided here for your convenience.
+
+If you are using a containerized solution for running your network (which for obvious reasons is a popular choice), **it is a best practice to mount volumes for the certificate directories external to the container where the node itself is running. This will allow the certificates to be used by an ordering node container, regardless of whether the ordering node container goes down, becomes corrupted, or is restarted.**
+
+#### TLS certificates
+
+For the ordering node to launch successfully, the locations of the TLS certificates you specified in the [Checklist for a production ordering node](./ordererchecklist.html) must point to the correct certificates. To do this:
+
+- Copy the **TLS CA Root certificate**, which by default is called `ca-cert.pem`, to the orderer organization MSP definition `organizations/ordererOrganizations/ordererOrg1.example.com/msp/tlscacerts/tls-cert.pem`.
+- Copy the **CA Root certificate**, which by default is called `ca-cert.pem`, to the orderer organization MSP definition `organizations/ordererOrganizations/ordererOrg1.example.com/msp/cacerts/ca-cert.pem`.
+- When you enroll the orderer identity with the TLS CA, the public key is generated in the `signcerts` folder, and the private key is located in the `keystore` directory. Rename the private key in the `keystore` folder to `orderer0-tls-key.pem` so that it can be easily recognized later as the TLS private key for this node.
+- Copy the orderer TLS certificate and private key files to `organizations/ordererOrganizations/ordererOrg1.example.com/orderers/orderer0.ordererOrg1.example.com/tls`.
+The path and name of the certificate and private key files correspond to the values of the `General.TLS.Certificate` and `General.TLS.PrivateKey` parameters in the `orderer.yaml`.
+
+**Note:** Don't forget to create the `config.yaml` file and add it to the organization MSP and local MSP folder for each ordering node. This file enables Node OU support for the MSP, an important feature that allows the MSP's admin to be identified based on an "admin" OU in an identity's certificate. Learn more in the [Fabric CA](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/use_CA.html#nodeous) documentation.
+
+#### Orderer local MSP (enrollment certificate and private key)
+
+Similarly, you need to point to the [local MSP of your orderer](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/use_CA.html#create-the-local-msp-of-a-node) by copying the MSP folder to `organizations/ordererOrganizations/ordererOrg1.example.com/orderers/orderer0.ordererOrg1.example.com/msp`. This path corresponds to the value of the `General.LocalMSPDir` parameter in the `orderer.yaml` file. Because of the Fabric concept of ["Node Organization Unit (OU)"](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/use_CA.html#nodeous), you do not need to specify an admin of the orderer when bootstrapping. Rather, the role of "admin" is conferred onto an identity by setting an OU value of "admin" inside a certificate, which is enabled by the `config.yaml` file. When Node OUs are enabled, any admin identity from this organization will be able to administer the orderer.
+
+Note that the local MSP contains the signed certificate (public key) and the private key for the orderer. The private key is used by the node to sign transactions, and is therefore not shared and must be secured. For maximum security, a Hardware Security Module (HSM) can be configured to generate and store this private key.
+
+### Create the ordering service genesis block
+
+The first channel that is created in a Fabric network is the "system" channel. The system channel defines the set of ordering nodes that form the ordering service and the set of organizations that serve as ordering service administrators. Peers transact on private "application" channels that are derived from the ordering service system channel, which also defines the "consortium" (the peer organizations known to the ordering service). Therefore, before you can deploy an ordering service, you need to generate the system channel configuration by creating the system channel "genesis block" using a tool called `configtxgen`. We'll then use the generated system channel genesis block to bootstrap each ordering node.
+
+#### Set up the `configtxgen` tool
+
+While it is possible to build the channel creation transaction file manually, it is easier to use the [configtxgen](../commands/configtxgen.html) tool, which works by reading a `configtx.yaml` file that defines the configuration of your channel and then writing the relevant information into a configuration block known as the "genesis block".
+
+Notice that the `configtxgen` tool is located in the `bin` folder of downloaded Fabric binaries.
+
+Before using `configtxgen`, confirm you have set the `FABRIC_CFG_PATH` environment variable to the path of the directory that contains your local copy of the `configtx.yaml` file. You can verify that you are able to use the tool by printing the `configtxgen` help text:
+
+```
+configtxgen --help
+```
+
+#### The `configtx.yaml` file
+
+The `configtx.yaml` file is used to specify the **channel configuration** of the system channel and application channels. The information that is required to build the channel configuration is specified in a readable and editable form in the `configtx.yaml` file. The `configtxgen` tool uses the channel profiles that are defined in `configtx.yaml` to create the channel configuration block in the [protobuf format](https://developers.google.com/protocol-buffers).
+
+The `configtx.yaml` file is located in the `config` folder alongside the binaries that you downloaded and contains the following configuration sections that we need to create our new channel:
+
+- **Organizations:** The organizations that can become members of your channel. Each organization has a reference to the cryptographic material that is used to build the [channel MSP](../membership/membership.html).
+- **Orderer:** Which ordering nodes will form the Raft consenter set of the channel.
+- **Policies:** Different sections of the file work together to define the channel policies that will govern how organizations interact with the channel and which organizations need to approve channel updates. For the purposes of this tutorial, we will use the defaults that are used by Fabric. For more information about policies, check out [Policies](../policies/policies.html).
+- **Profiles:** Each channel profile references information from other sections of the `configtx.yaml` file to build a channel configuration. The profiles are used to create the genesis block of the channel.
+
+The `configtxgen` tool uses the `configtx.yaml` file to create the genesis block for the channel. A detailed version of the `configtx.yaml` file is available in the [Fabric sample configuration](https://github.com/hyperledger/fabric/blob/{BRANCH}/sampleconfig/configtx.yaml). Refer to the [Using configtx.yaml to build a channel configuration](../create_channel/create_channel_config.html) tutorial to learn more about the settings in this file.
+
+#### Generate the system channel genesis block
+
+The first channel that is created in a Fabric network is the system channel. The system channel defines the set of ordering nodes that form the ordering service and the set of organizations that serve as ordering service administrators. The system channel also includes the organizations that are members of the blockchain [consortium](../glossary.html#consortium). The consortium is a set of peer organizations that belong to the system channel, but are not administrators of the ordering service. Consortium members have the ability to create new channels and include other consortium organizations as channel members.
+
+The genesis block of the system channel is required to deploy a new ordering service.
+A good example of a system channel profile can be found in the [test network configtx.yaml](https://github.com/hyperledger/fabric-samples/blob/master/test-network/configtx/configtx.yaml#L319) which includes the `TwoOrgsOrdererGenesis` profile as shown below:
+
+```yaml
+TwoOrgsOrdererGenesis:
+    <<: *ChannelDefaults
+    Orderer:
+        <<: *OrdererDefaults
+        Organizations:
+            - *OrdererOrg
+        Capabilities:
+            <<: *OrdererCapabilities
+    Consortiums:
+        SampleConsortium:
+            Organizations:
+                - *Org1
+                - *Org2
+```
+
+The `Orderer:` section of the profile defines the Raft ordering service, with the `OrdererOrg` as the ordering service administrator. The `Consortiums` section of the profile creates a consortium of peer organizations named `SampleConsortium`. For a production deployment, it is recommended that the peer and ordering nodes belong to separate organizations. This example uses peer organizations `Org1` and `Org2`. You will want to customize this section by providing your own consortium name and replacing `Org1` and `Org2` with the names of your peer organizations. If they are unknown at this time, you do not have to list any organizations under `Consortiums.SampleConsortium.Organizations`, although adding the peer organizations now saves the effort of a channel configuration update later. If you do add them, don't forget to define the peer organizations in the `Organizations:` section at the top of the `configtx.yaml` file as well. Notice this profile is missing an `Application:` section. You will need to create the application channels after you deploy the ordering service.
+
+After you have completed editing the `configtx.yaml` to reflect the orderer and peer organizations that will participate in your network, run the following command to create the genesis block of the system channel:
+```
+configtxgen -profile TwoOrgsOrdererGenesis -channelID system-channel -outputBlock ./system-genesis-block/genesis.block
+```
+
+Where:
+- `-profile` refers to the `TwoOrgsOrdererGenesis` profile in `configtx.yaml`.
+- `-channelID` is the name of the system channel. In this tutorial, the system channel is named `system-channel`.
+- `-outputBlock` refers to the location of the generated genesis block.
+
+When the command is successful, you will see logs of `configtxgen` loading the `configtx.yaml` file and creating the system channel genesis block:
+```
+INFO 001 Loading configuration
+INFO 002 Loaded configuration: /Users/fabric-samples/test-network/configtx/configtx.yaml
+INFO 003 Generating new channel configtx
+INFO 004 Generating genesis block
+INFO 005 Creating system channel genesis block
+INFO 006 Writing genesis block
+```
+
+Make note of the generated output block filename. This is your genesis block and will be referenced in the `orderer.yaml` file below.
+
+### Storage
+
+You must provision persistent storage for your ledgers. The default location for the ledger is `/var/hyperledger/production/orderer`. Ensure that your orderer has write access to the folder. If you choose to use a different location, provide that path in the `FileLedger.Location` parameter in the `orderer.yaml` file. If you decide to use Kubernetes or Docker, recall that in a containerized environment, local storage disappears when the container goes away, so you will need to provision or mount persistent storage for the ledger before you deploy an orderer.
+
+### Configuration of `orderer.yaml`
+
+Now you can use the [Checklist for a production ordering node](./ordererchecklist.html) to modify the default settings in the `orderer.yaml` file.
+In the future, if you decide to deploy the orderer through Kubernetes or Docker, you can override the same default settings by using environment variables instead. Check out the [note](../deployment_guide_overview.html#step-five-deploy-orderers-and-ordering-nodes) in the deployment guide overview for instructions on how to construct the environment variable names for an override.
+
+At a minimum, you need to configure the following parameters:
+- `General.ListenAddress` - Hostname that the ordering node listens on.
+- `General.ListenPort` - Port that the ordering node listens on.
+- `General.TLS.Enabled: true` - Server-side TLS should be enabled in all production networks.
+- `General.TLS.PrivateKey` - Ordering node private key from the TLS CA.
+- `General.TLS.Certificate` - Ordering node signed certificate (public key) from the TLS CA.
+- `General.TLS.RootCAs` - This value should be unset.
+- `General.BootstrapMethod: file` - Start the ordering service with a system channel.
+- `General.BootstrapFile` - Path to and name of the genesis block file for the ordering service system channel.
+- `General.LocalMSPDir` - Path to the ordering node MSP folder.
+- `General.LocalMSPID` - MSP ID of the ordering organization as specified in the channel configuration.
+- `FileLedger.Location` - Location of the orderer ledger on the file system.
+
+## Start the orderer
+
+Make sure you have set the value of `FABRIC_CFG_PATH` to be the location of the `orderer.yaml` file relative to where you are invoking the orderer binary. For example, if you run the orderer binary from the `fabric/bin` folder, it would point to the `/config` folder:
+```
+export FABRIC_CFG_PATH=../config
+```
+
+After `orderer.yaml` has been configured and your deployment backend is ready, you can simply start the orderer node with the following command:
+
+```
+cd bin
+./orderer start
+```
+
+When the orderer starts successfully, you should see a message similar to:
+
+```
+INFO 019 Starting orderer:
+INFO 01a Beginning to serve requests
+```
+
+You have successfully started one node. You now need to repeat these steps to configure and start the other two orderers. When a majority of orderers are started, a Raft leader is elected. You should see something similar to the following output:
+```
+INFO 01b Applied config change to add node 1, current nodes in channel: [1] channel=syschannel node=1
+INFO 01c Applied config change to add node 2, current nodes in channel: [1 2] channel=syschannel node=1
+INFO 01d Applied config change to add node 3, current nodes in channel: [1 2 3] channel=syschannel node=1
+INFO 01e raft.node: 1 elected leader 2 at term 11 channel=syschannel node=1
+INFO 01f Raft leader changed: 0 -> 2 channel=syschannel node=1
+```
+
+## Next steps
+
+Your ordering service is started and ready to order transactions into blocks. Depending on your use case, you may need to add or remove orderers from the consenter set, or other organizations may want to contribute their own orderers to the cluster. See the topic on ordering service [reconfiguration](../raft_configuration.html#reconfiguration) for considerations and instructions.
+
+Finally, your system channel includes a consortium of peer organizations as defined in the `Organizations` section of the channel configuration. These peer organizations are allowed to create channels on your ordering service. You need to use the `configtxgen` command and the `configtx.yaml` file to create an application channel, as sketched below.
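+For example, assuming your `configtx.yaml` contains an application channel profile named `TwoOrgsChannel` (the profile and channel names here are placeholders), the channel creation transaction could be generated with a command along these lines:
+
+```
+configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel1.tx -channelID channel1
+```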
+Refer to the [Creating a channel](../create_channel/create_channel.html#creating-an-application-channel) tutorial for more details.
+
+## Troubleshooting
+
+### When you start the orderer, it fails with the following error:
+```
+ERRO 001 failed to parse config: Error reading configuration: Unsupported Config Type ""
+```
+
+**Solution:**
+
+Your `FABRIC_CFG_PATH` is not set. Run the following command to set it to the location of your `orderer.yaml` file.
+
+```
+export FABRIC_CFG_PATH=<path to the directory that contains orderer.yaml>
+```
+
+### When you start the orderer, it fails with the following error:
+```
+PANI 003 Failed to setup local msp with config: administrators must be declared when no admin ou classification is set
+```
+
+**Solution:**
+
+Your local MSP definition is missing the `config.yaml` file. Create the file and copy it into the local MSP `/msp` folder of the orderer. See the [Fabric CA](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/use_CA.html#nodeous) documentation for more instructions.
+
+
+### When you start the orderer, it fails with the following error:
+```
+PANI 005 Failed validating bootstrap block: initializing channelconfig failed: could not create channel Orderer sub-group config: setting up the MSP manager failed: administrators must be declared when no admin ou classification is set
+```
+
+**Solution:**
+
+The system channel configuration is missing the `config.yaml` file. If you are creating a new ordering service, the `MSPDir` referenced in the `configtx.yaml` file is missing the `config.yaml` file. Follow the instructions in the [Fabric CA](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/use_CA.html#nodeous) documentation to generate this file and then rerun `configtxgen` to regenerate the genesis block for the system channel.
+```
+# MSPDir is the filesystem path which contains the MSP configuration.
+  MSPDir: ../config/organizations/ordererOrganizations/ordererOrg1.example.com/msp
+```
+Before you restart the orderer, delete the existing channel ledger files that are stored at the location specified by the `FileLedger.Location` setting of the `orderer.yaml` file.
+
+
+### When you start the orderer, it fails with the following error:
+```
+PANI 004 Failed validating bootstrap block: the block isn't a system channel block because it lacks ConsortiumsConfig
+```
+**Solution:**
+
+Your channel configuration is missing the consortium definition. If you are starting a new ordering service, edit the `Profiles:` section of the `configtx.yaml` file and add the consortium definition:
+```
+Consortiums:
+  SampleConsortium:
+    Organizations:
+```
+The `Consortiums:` section is required but can be empty, as shown above, if the peer organizations are not yet known. Rerun `configtxgen` to regenerate the genesis block for the system channel and then, before you start the orderer, delete the existing channel ledger files that are stored at the location specified by the `FileLedger.Location` setting of the `orderer.yaml` file.
+
+### When you start the orderer, it fails with the following error:
+```
+PANI 27c Failed creating a block puller: client certificate isn't in PEM format: channel=mychannel node=3
+```
+
+**Solution:**
+
+Your `orderer.yaml` file is missing the `General.Cluster.ClientCertificate` and `General.Cluster.ClientPrivateKey` definitions. Provide the path to and filename of the public certificate (also known as a signed certificate) and private key generated by your TLS CA for the orderer in these two fields and restart the node.
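+For reference, here is a sketch of the relevant stanza of `orderer.yaml`; the paths shown are placeholders and must point to the TLS certificate and private key issued for this orderer:
+
+```
+General:
+  Cluster:
+    # client TLS certificate presented to other ordering nodes in the cluster
+    ClientCertificate: tls/server.crt
+    # private key matching the client TLS certificate
+    ClientPrivateKey: tls/server.key
+```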
+### When you start the orderer, it fails with the following error:
+```
+ServerHandshake -> ERRO 025 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=192.168.1.134:52413
+```
+
+**Solution:**
+
+This error can occur when the `tlscacerts` folder is missing from the orderer organization MSP folder specified in the channel configuration. Create the `tlscacerts` folder inside your MSP definition and insert the root certificate from your TLS CA (`ca-cert.pem`). Rerun `configtxgen` to regenerate the genesis block for the system channel so that the channel configuration includes this certificate. Before you start the orderer again, delete the existing channel ledger files that are stored at the location specified by the `FileLedger.Location` setting of the `orderer.yaml` file.
+
+
diff --git a/docs/source/deployorderer/ordererplan.md b/docs/source/deployorderer/ordererplan.md
new file mode 100644
index 00000000000..23856543f75
--- /dev/null
+++ b/docs/source/deployorderer/ordererplan.md
@@ -0,0 +1,84 @@
+# Planning for an ordering service
+
+Audience: Architects, network operators, users setting up a production Fabric network who are familiar with Transport Layer Security (TLS), Public Key Infrastructure (PKI) and Membership Service Providers (MSPs).
+
+Check out the conceptual topic on [The Ordering Service](../orderer/ordering_service.html) for an overview on ordering service concepts, implementations, and the role an ordering service plays in a transaction.
+
+In a Hyperledger Fabric network, a node or a collection of nodes forms what's called an "ordering service", which literally orders transactions into blocks, which peers will then validate and commit to their ledgers. This separates Fabric from other distributed blockchains, such as Ethereum and Bitcoin, in which this ordering is done by any and all nodes.
+
+Whereas Fabric networks that will only be used for testing and development purposes (such as our [test network](../test_network.html)) often feature an ordering service made up of only one node (these nodes are typically referred to as "orderers" or "ordering nodes"), production networks require a more robust deployment of at least three nodes. For this reason, our deployment guide will feature instructions on how to create a three-node ordering service. For more guidance on the number of nodes you should deploy, check out [Cluster considerations](#cluster-considerations).
+
+## Generate ordering node identities and Membership Service Providers (MSPs)
+
+Before proceeding with this topic, you should have reviewed the process for deploying a Certificate Authority (CA) for your organization in order to generate the identities and MSPs for the admins and ordering nodes in your organization. To learn how to use a CA to create these identities, check out [Registering and enrolling identities with a CA](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/use_CA.html). Note that the best practice is to register and enroll a separate node identity for each ordering node and to use distinct TLS certificates for each node (a brief sketch follows at the end of this section).
+
+Note that the `cryptogen` tool should never be used to generate any identities in a production scenario.
+
+In this deployment guide, we’ll assume that all ordering nodes will be created and owned by the same orderer organization. However, it is possible for multiple organizations to contribute nodes to an ordering service, both during the creation of the ordering service and after the ordering service has been created.
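+To illustrate that best practice, here is a brief sketch of registering and enrolling one ordering node identity with a Fabric CA. The identity name, secret, and CA URL are placeholder values, and the commands assume an enrolled CA admin context as described in the CA deployment guide:
+
+```
+# Register an identity of type "orderer" for a single ordering node
+fabric-ca-client register -u https://ca.example.com:7054 --id.name orderer1 --id.secret ordererpw --id.type orderer
+# Enroll that identity to generate its MSP (certificate and private key)
+fabric-ca-client enroll -u https://orderer1:ordererpw@ca.example.com:7054
+```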
+
+## Folder management
+
+While it is possible to bootstrap an ordering node using a number of folder structures for your MSPs and certificates, we do recommend the folder structure outlined in [Registering and enrolling identities with a CA](https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/deployguide/use_CA.html#decide-on-the-structure-of-your-folders-and-certificates) for the sake of consistency and repeatability. Although it is not required, these instructions will presume that you have used that folder structure.
+
+## Certificates from a non-Fabric CA
+
+While it is possible to use a non-Fabric CA to generate identities, that process requires you to manually construct the MSP folders that the ordering service and its organization need. It will not be covered here; these instructions will instead focus on using a Fabric CA to generate the identities and MSP folders for you.
+
+## Transport Layer Security (TLS) enablement
+
+To prevent “man in the middle” attacks and otherwise secure communications, the use of TLS is a requirement for any production network. Therefore, in addition to registering your ordering node identities with your organization CA, you will also need to create certificates for your ordering nodes with the TLS CA of your organization. These TLS certificates will be used by the ordering nodes when communicating with the network.
+
+## Creating the system channel genesis block
+
+Note: “consenters” refers to the nodes servicing a particular channel at a particular time. For each channel, the “consenters” may be a subset of the ordering nodes available in the system channel.
+
+Every ordering node must be bootstrapped with a configuration block from the system channel (either the system channel “genesis block” or a later configuration block). This guide will assume you are creating a new ordering service and will therefore bootstrap ordering nodes from a system channel genesis block.
+
+This “system channel” is a special channel run by the ordering service and contains, among other things, the list of peer organizations that are allowed to create application channels (this list is known as the “consortium”). Although this system channel cannot be joined by peers or peer organizations (and thus, no transactions other than configuration transactions can be made on it), it does contain many of the same configuration parameters that application channels contain. Because application channels inherit these configuration values by default unless they are changed during the channel creation process, take care when creating your system channel genesis block to keep the use case of your network in mind.
+
+If you’re creating an ordering service, you must create this system channel genesis block by specifying the necessary parameters in `configtx.yaml` and using the `configtxgen` tool to create the block (a sketch follows below).
+
+If you are adding a node to the system channel, the best practice is to bootstrap using the latest configuration block of the system channel. Similarly, an ordering node added to the consenter set of an application channel will be bootstrapped using the latest configuration block of that channel.
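+To make the creation path concrete, here is a minimal sketch of generating a system channel genesis block with `configtxgen`. The profile name, channel ID, and output path are assumptions for this example; use the names defined in your own `configtx.yaml`:
+
+```
+# Point the Fabric tools at the directory containing configtx.yaml.
+export FABRIC_CFG_PATH=$PWD
+
+# Build the system channel genesis block from a profile defined in
+# configtx.yaml (the profile and channel names here are illustrative).
+configtxgen -profile OrdererGenesisProfile -channelID system-channel \
+  -outputBlock ./system-genesis-block/genesis.block
+```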
+
+Note that the `configtx.yaml` that is shipped with the Fabric binaries is identical to the [sample `configtx.yaml` found here](https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml), and contains the channel "profiles" that are used to specify particular desired policies and parameters (for example, a profile can specify which of the ordering nodes that are consenters in the system channel will be used in a particular application channel). When creating a channel (whether an orderer system channel or an application channel), you specify a particular profile by name in your channel creation command, and that profile, along with the other parameters specified in `configtx.yaml`, is used to build the configuration block.
+
+You will likely have to modify one of these profiles in order to create your system channel and your application channels (if nothing else, you are likely to have to modify the sample organization names). Note that to create a Raft ordering service, you will have to specify an `OrdererType` of `etcdraft` (a sketch of such a profile fragment appears under [Cluster considerations](#cluster-considerations) below).
+
+Check out the [tutorial on creating a channel](../create_channel/create_channel.html#the-orderer-system-channel) for more information on how to create a system channel genesis block and application channels.
+
+### Creating profiles for application channels
+
+Both the system channel and all application channels are built using a `configtx.yaml` file. Therefore, when editing your `configtx.yaml` to create the genesis block for your system channel, you can also add profiles for any application channels that will be created on this network. However, note that while you can define any set of consenters for each channel, **every consenter added to an application channel must first be a part of the system channel**. You cannot specify a consenter that is not a part of the system channel. Also, it is not possible to control which node leads the consenter set. Leaders are chosen by the `etcdraft` protocol used by the ordering nodes.
+
+## Sizing your ordering node resources
+
+Because ordering nodes do not host a state database or chaincode, an ordering node will typically have only a single container associated with it. Like the “peer container” associated with the peer, this container encapsulates the ordering process that orders transactions into blocks for all channels on which the ordering node is a consenter (ordering nodes also validate actions in particular cases). The ordering node storage includes the blockchain for all of the channels on which the node is a consenter.
+
+Note that, at a logical level, the “consenter set” of each channel is a separate ordering service, in which “alive” messages and other communications are duplicated. This affects the CPU and memory required for each node. There is also a direct relationship between the size of a consenter set and the amount of resources each node will need. This is because in a Raft ordering service, the nodes do not collaborate in ordering transactions. One node, a "leader" elected by the other nodes, performs all ordering and validation functions, and then replicates decisions to the other nodes. As a result, as consenter sets increase in size, there is more traffic and burden on the leader node and more communication across the consenter set.
+
+More on this in [Cluster considerations](#cluster-considerations).
+
+## Cluster considerations
+
+For more guidance on the number of nodes you should deploy, check out [Raft](../orderer/ordering_service.html#raft).
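+Because the cluster-level decisions discussed below are ultimately expressed as a consenter set, it may help to see one declared. The following is a hedged sketch of the `Orderer` section of a `configtx.yaml` describing a three node Raft consenter set; the hosts, ports, and certificate paths are assumptions for this example:
+
+```
+Orderer: &OrdererDefaults
+  OrdererType: etcdraft
+  EtcdRaft:
+    Consenters:
+      - Host: orderer1.example.com
+        Port: 7050
+        ClientTLSCert: ordererOrganizations/example.com/orderers/orderer1.example.com/tls/server.crt
+        ServerTLSCert: ordererOrganizations/example.com/orderers/orderer1.example.com/tls/server.crt
+      - Host: orderer2.example.com
+        Port: 7050
+        ClientTLSCert: ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt
+        ServerTLSCert: ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt
+      - Host: orderer3.example.com
+        Port: 7050
+        ClientTLSCert: ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.crt
+        ServerTLSCert: ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.crt
+```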
+
+Raft is a leader-based protocol, in which a single leader validates transactions, orders blocks, and replicates the data out to the followers. Raft is based on the concept of a quorum: as long as a majority of the Raft nodes are online, the Raft cluster stays available.
+
+On the one hand, the more Raft nodes that are deployed, the more nodes can be lost while a majority remains available (if a majority of nodes is not available, the cluster will cease to process and create blocks). A five node cluster, for example, can tolerate two down nodes, while a seven node cluster can tolerate three down nodes.
+
+On the other hand, more nodes means a larger communication overhead, as the leader must communicate with all of the nodes in order for the ordering service to function properly. If a node thinks it has lost connection with the leader, even if this loss of communication is only due to a networking or processing delay, it is designed to trigger a leader election. Unnecessary leader elections only add to the communication overhead for the leader, progressively escalating the burden on the cluster. And because each channel an ordering node participates in is, logically, a separate Raft instance, an orderer participating in 100 channels is doing 100 times the work of an ordering node in a single channel.
+
+For these reasons, Raft clusters of more than a few dozen nodes begin to see noticeable performance degradation, and once clusters reach about 100 nodes, they begin having trouble maintaining a quorum. The point at which a deployment experiences issues depends on factors such as networking speeds and the other resources available, and parameters such as the tick interval can be used to mitigate the larger communication overhead.
+
+The optimal number of ordering nodes for your ordering service ultimately depends on your use case, your resources, and your topology. However, clusters of three, five, seven, or nine nodes are the most popular, with no more than about 50 channels per orderer.
+
+## Storage considerations and monitoring
+
+The storage that should be allocated to an ordering node depends on factors such as the expected transaction throughput, the size of blocks, and the number of channels the node will be joined to. Your needs will depend on your use case, but the best practice is to monitor the storage available to your nodes closely. If your infrastructure allows it, you may also decide to enable an autoscaler, which will allocate more resources to your node as needed.
+
+If the storage for an ordering node is exhausted, you also have the option to deploy a new node with a larger storage allocation and allow it to sync with the relevant ledgers. If you have several ordering nodes available to use, ensure that each node is a consenter on approximately the same number of channels.
+
+In a production environment you should also monitor the CPU and memory allocated to an ordering node using widely available tooling. If you see an ordering node struggling to keep up (for example, it might be calling for leader elections when they are not needed), it is a sign that you might need to increase its resource allocation.
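+One concrete way to collect these metrics is to enable the orderer's operations service and scrape it with Prometheus. The fragment below is a sketch of the relevant `orderer.yaml` settings; the listen address is an assumption, and in production the endpoint should be protected (for example, with TLS):
+
+```
+Operations:
+  # Address the operations (health and metrics) endpoint listens on.
+  ListenAddress: 127.0.0.1:8443
+Metrics:
+  # One of: disabled, statsd, prometheus.
+  Provider: prometheus
+```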
+
+
diff --git a/docs/source/orderer/ordering_service.md b/docs/source/orderer/ordering_service.md
index 1a0d92d213a..4a3100c4a3e 100644
--- a/docs/source/orderer/ordering_service.md
+++ b/docs/source/orderer/ordering_service.md
@@ -55,7 +55,7 @@ peers then processs the configuration transactions in order to verify that the
 modifications approved by the orderer do indeed satisfy the policies defined in
 the channel.
 
-## Orderer nodes and Identity
+## Orderer nodes and identity
 
 Everything that interacts with a blockchain network, including peers,
 applications, admins, and orderers, acquires their organizational identity from
@@ -243,7 +243,12 @@ majority of ordering nodes (what's known as a "quorum") remaining, Raft is said
 to be "crash fault tolerant" (CFT). In other words, if there are three nodes in
 a channel, it can withstand the loss of one node (leaving two remaining). If
 you have five nodes in a channel, you can lose two nodes (leaving three
-remaining nodes).
+remaining nodes). This feature of a Raft ordering service is a factor in
+establishing a high availability strategy for your ordering service. Additionally,
+in a production environment, you would want to spread these nodes across data
+centers and even locations, for example, by putting one node in each of three
+different data centers. That way, if a data center or entire location becomes
+unavailable, the nodes in the other data centers continue to operate.
 
 From the perspective of the service they provide to a network or a channel,
 Raft and the existing Kafka-based ordering service (which we'll talk about later) are