diff --git a/doc/code_snippets/README.md b/doc/code_snippets/README.md index 23f5d9108c..3ee120f28b 100644 --- a/doc/code_snippets/README.md +++ b/doc/code_snippets/README.md @@ -1,25 +1,43 @@ # Tarantool code examples -The `doc/code_snippets` folder of a Tarantool documentation repository contains runnable code examples that show how to work with various Tarantool modules. Code from these examples is [referenced](#referencing-code-snippets) in corresponding documentation sections. +The `doc/code_snippets` folder of a Tarantool documentation repository contains runnable code examples that show how to work with Tarantool: + +- The [snippets](snippets) folder contains sample applications that demonstrate how to configure a Tarantool cluster. +- The [test](test) folder contains testable Lua examples that show how to work with various Tarantool modules. + +Code from these examples is [referenced](#referencing-code-snippets) in corresponding documentation sections. ## Prerequisites -First, install the [tt CLI utility](https://www.tarantool.io/en/doc/latest/reference/tooling/tt_cli/). -Then, go to the `doc/code_snippets` folder and install the following libraries: +- Install the [tt CLI utility](https://www.tarantool.io/en/doc/latest/reference/tooling/tt_cli/). +- To be able to run tests for samples from [test](test), go to the `doc/code_snippets` folder and install the following libraries: -- Install [luatest](https://github.com/tarantool/luatest): - ```shell - tt rocks install luatest - ``` + - [luatest](https://github.com/tarantool/luatest): + ```shell + tt rocks install luatest + ``` + + - [luarapidxml](https://github.com/tarantool/luarapidxml): + ```shell + tt rocks install luarapidxml + ``` -- Install [luarapidxml](https://github.com/tarantool/luarapidxml): - ```shell - tt rocks install luarapidxml - ``` +## Running + +### Running applications from 'snippets' + +To run applications placed in [snippets](snippets), follow these steps: + +1. 
Go to the directory containing samples for a specific feature, for example, [snippets/replication](snippets/replication). +2. To run applications placed in [instances.enabled](instances.enabled), execute the `tt start` command, for example: + + ```console + $ tt start auto_leader + ``` -## Running and testing examples +### Running and testing examples from 'test' To test all the examples, go to the `doc/code_snippets` folder and execute the `luatest` command: diff --git a/doc/code_snippets/snippets/config/tt.yaml b/doc/code_snippets/snippets/config/tt.yaml index 66f9f7f7e1..41a3915f50 100644 --- a/doc/code_snippets/snippets/config/tt.yaml +++ b/doc/code_snippets/snippets/config/tt.yaml @@ -1,63 +1,54 @@ -tt: - modules: - # Directory where the external modules are stored. - directory: "modules" +modules: + # Directory where the external modules are stored. + directory: modules - app: - # Directory that stores various instance runtime - # artifacts like console socket, PID file, etc. - run_dir: "var/run" +env: + # Restart instance on failure. + restart_on_failure: false - # Directory that stores log files. - log_dir: var/log + # Directory that stores binary files. + bin_dir: bin - # The maximum size in MB of the log file before it gets rotated. - log_maxsize: 100 + # Directory that stores Tarantool header files. + inc_dir: include - # The maximum number of days to retain old log files. - log_maxage: 8 + # Path to directory that stores all applications. + # The directory can also contain symbolic links to applications. + instances_enabled: instances.enabled - # The maximum number of old log files to retain. - log_maxbackups: 10 + # Tarantoolctl artifacts layout compatibility: if set to true tt will not create application + # sub-directories for control socket, pid files, log files, etc.. Data files (wal, vinyl, + # snap) and multi-instance applications are not affected by this option. + tarantoolctl_layout: false - # Restart instance on failure. 
- restart_on_failure: false +app: + # Directory that stores various instance runtime + # artifacts like console socket, PID file, etc. + run_dir: var/run - # Directory where write-ahead log (.xlog) files are stored. - wal_dir: "var/lib" + # Directory that stores log files. + log_dir: var/log - # Directory where memtx stores snapshot (.snap) files. - memtx_dir: "var/lib" + # Directory where write-ahead log (.xlog) files are stored. + wal_dir: var/lib - # Directory where vinyl files or subdirectories will be stored. - vinyl_dir: "var/lib" + # Directory where memtx stores snapshot (.snap) files. + memtx_dir: var/lib - # Directory that stores binary files. - bin_dir: "bin" + # Directory where vinyl files or subdirectories will be stored. + vinyl_dir: var/lib - # Directory that stores Tarantool header files. - inc_dir: "include" +# Path to file with credentials for downloading Tarantool Enterprise Edition. +# credential_path: /path/to/file +ee: + credential_path: - # Path to directory that stores all applications. - # The directory can also contain symbolic links to applications. - instances_enabled: "instances.enabled" +templates: + # The path to templates search directory. + - path: templates - # Tarantoolctl artifacts layout compatibility: if set to true tt will not create application - # sub-directories for control socket, pid files, log files, etc.. Data files (wal, vinyl, - # snap) and multi-instance applications are not affected by this option. - tarantoolctl_layout: false - - # Path to file with credentials for downloading Tarantool Enterprise Edition. - # credential_path: /path/to/file - ee: - credential_path: "" - - templates: - # The path to templates search directory. - - path: "templates" - - repo: - # Directory where local rocks files could be found. - rocks: "" - # Directory that stores installation files. - distfiles: "distfiles" +repo: + # Directory where local rocks files could be found. + rocks: + # Directory that stores installation files. 
+ distfiles: distfiles diff --git a/doc/code_snippets/snippets/replication/README.md b/doc/code_snippets/snippets/replication/README.md new file mode 100644 index 0000000000..6298a1142a --- /dev/null +++ b/doc/code_snippets/snippets/replication/README.md @@ -0,0 +1,11 @@ +# Replication + +A sample application demonstrating various replication features. + +## Running + +To run applications placed in [instances.enabled](instances.enabled), go to the `replication` directory in the terminal and execute the `tt start` command, for example: + +```console +$ tt start auto_leader +``` diff --git a/doc/code_snippets/snippets/replication/instances.enabled/bootstrap_strategy/config.yaml b/doc/code_snippets/snippets/replication/instances.enabled/bootstrap_strategy/config.yaml new file mode 100644 index 0000000000..c6bdfda820 --- /dev/null +++ b/doc/code_snippets/snippets/replication/instances.enabled/bootstrap_strategy/config.yaml @@ -0,0 +1,30 @@ +credentials: + users: + replicator: + password: 'topsecret' + roles: [replication] + +iproto: + advertise: + peer: replicator@ + +replication: + failover: election + +groups: + group001: + replicasets: + replicaset001: + replication: + bootstrap_strategy: config + bootstrap_leader: instance001 + instances: + instance001: + iproto: + listen: 127.0.0.1:3301 + instance002: + iproto: + listen: 127.0.0.1:3302 + instance003: + iproto: + listen: 127.0.0.1:3303 \ No newline at end of file diff --git a/doc/code_snippets/snippets/replication/instances.enabled/bootstrap_strategy/instances.yml b/doc/code_snippets/snippets/replication/instances.enabled/bootstrap_strategy/instances.yml new file mode 100644 index 0000000000..6c765b2e67 --- /dev/null +++ b/doc/code_snippets/snippets/replication/instances.enabled/bootstrap_strategy/instances.yml @@ -0,0 +1,3 @@ +instance001: +instance002: +instance003: \ No newline at end of file diff --git a/doc/code_snippets/snippets/replication/instances.enabled/manual_leader/config.yaml 
b/doc/code_snippets/snippets/replication/instances.enabled/manual_leader/config.yaml index d4f315a1d4..3b30a4b6cb 100644 --- a/doc/code_snippets/snippets/replication/instances.enabled/manual_leader/config.yaml +++ b/doc/code_snippets/snippets/replication/instances.enabled/manual_leader/config.yaml @@ -3,9 +3,6 @@ credentials: replicator: password: 'topsecret' roles: [replication] - client: - password: 'secret' - roles: [super] iproto: advertise: diff --git a/doc/code_snippets/snippets/replication/instances.enabled/peers/config.yaml b/doc/code_snippets/snippets/replication/instances.enabled/peers/config.yaml new file mode 100644 index 0000000000..ac3ed89768 --- /dev/null +++ b/doc/code_snippets/snippets/replication/instances.enabled/peers/config.yaml @@ -0,0 +1,27 @@ +credentials: + users: + replicator: + password: 'topsecret' + roles: [replication] + +replication: + peers: + - replicator:topsecret@127.0.0.1:3301 + - replicator:topsecret@127.0.0.1:3302 + - replicator:topsecret@127.0.0.1:3303 + failover: election + +groups: + group001: + replicasets: + replicaset001: + instances: + instance001: + iproto: + listen: 127.0.0.1:3301 + instance002: + iproto: + listen: 127.0.0.1:3302 + instance003: + iproto: + listen: 127.0.0.1:3303 \ No newline at end of file diff --git a/doc/code_snippets/snippets/replication/instances.enabled/peers/instances.yml b/doc/code_snippets/snippets/replication/instances.enabled/peers/instances.yml new file mode 100644 index 0000000000..6c765b2e67 --- /dev/null +++ b/doc/code_snippets/snippets/replication/instances.enabled/peers/instances.yml @@ -0,0 +1,3 @@ +instance001: +instance002: +instance003: \ No newline at end of file diff --git a/doc/code_snippets/snippets/replication/tt.yaml b/doc/code_snippets/snippets/replication/tt.yaml index 66f9f7f7e1..41a3915f50 100644 --- a/doc/code_snippets/snippets/replication/tt.yaml +++ b/doc/code_snippets/snippets/replication/tt.yaml @@ -1,63 +1,54 @@ -tt: - modules: - # Directory where the external 
modules are stored. - directory: "modules" +modules: + # Directory where the external modules are stored. + directory: modules - app: - # Directory that stores various instance runtime - # artifacts like console socket, PID file, etc. - run_dir: "var/run" +env: + # Restart instance on failure. + restart_on_failure: false - # Directory that stores log files. - log_dir: var/log + # Directory that stores binary files. + bin_dir: bin - # The maximum size in MB of the log file before it gets rotated. - log_maxsize: 100 + # Directory that stores Tarantool header files. + inc_dir: include - # The maximum number of days to retain old log files. - log_maxage: 8 + # Path to directory that stores all applications. + # The directory can also contain symbolic links to applications. + instances_enabled: instances.enabled - # The maximum number of old log files to retain. - log_maxbackups: 10 + # Tarantoolctl artifacts layout compatibility: if set to true tt will not create application + # sub-directories for control socket, pid files, log files, etc.. Data files (wal, vinyl, + # snap) and multi-instance applications are not affected by this option. + tarantoolctl_layout: false - # Restart instance on failure. - restart_on_failure: false +app: + # Directory that stores various instance runtime + # artifacts like console socket, PID file, etc. + run_dir: var/run - # Directory where write-ahead log (.xlog) files are stored. - wal_dir: "var/lib" + # Directory that stores log files. + log_dir: var/log - # Directory where memtx stores snapshot (.snap) files. - memtx_dir: "var/lib" + # Directory where write-ahead log (.xlog) files are stored. + wal_dir: var/lib - # Directory where vinyl files or subdirectories will be stored. - vinyl_dir: "var/lib" + # Directory where memtx stores snapshot (.snap) files. + memtx_dir: var/lib - # Directory that stores binary files. - bin_dir: "bin" + # Directory where vinyl files or subdirectories will be stored. 
+ vinyl_dir: var/lib - # Directory that stores Tarantool header files. - inc_dir: "include" +# Path to file with credentials for downloading Tarantool Enterprise Edition. +# credential_path: /path/to/file +ee: + credential_path: - # Path to directory that stores all applications. - # The directory can also contain symbolic links to applications. - instances_enabled: "instances.enabled" +templates: + # The path to templates search directory. + - path: templates - # Tarantoolctl artifacts layout compatibility: if set to true tt will not create application - # sub-directories for control socket, pid files, log files, etc.. Data files (wal, vinyl, - # snap) and multi-instance applications are not affected by this option. - tarantoolctl_layout: false - - # Path to file with credentials for downloading Tarantool Enterprise Edition. - # credential_path: /path/to/file - ee: - credential_path: "" - - templates: - # The path to templates search directory. - - path: "templates" - - repo: - # Directory where local rocks files could be found. - rocks: "" - # Directory that stores installation files. - distfiles: "distfiles" +repo: + # Directory where local rocks files could be found. + rocks: + # Directory that stores installation files. + distfiles: distfiles diff --git a/doc/code_snippets/snippets/sharding/README.md b/doc/code_snippets/snippets/sharding/README.md new file mode 100644 index 0000000000..bd192edf5c --- /dev/null +++ b/doc/code_snippets/snippets/sharding/README.md @@ -0,0 +1,61 @@ +# Sharded cluster + +A sample application demonstrating how to configure a [sharded](https://www.tarantool.io/en/doc/latest/concepts/sharding/) cluster. + +## Running + +To run the cluster, go to the `sharding` directory in the terminal and perform the following steps: + +1. Install `vshard`: + + ```console + $ tt rocks install vshard + ``` + +2. Run the cluster: + + ```console + $ tt start sharded_cluster + ``` + +3. 
Connect to the router: + + ```console + $ tt connect sharded_cluster:router-a-001 + ``` + +4. Insert test data: + + ```console + sharded_cluster:router-a-001> insert_data() + --- + ... + ``` + +5. Connect to storages in different replica sets to see how data is distributed across nodes: + + a. `storage-a-001`: + + ```console + sharded_cluster:storage-a-001> box.space.bands:select() + --- + - - [1, 614, 'Roxette', 1986] + - [2, 986, 'Scorpions', 1965] + - [5, 755, 'Pink Floyd', 1965] + - [7, 998, 'The Doors', 1965] + - [8, 762, 'Nirvana', 1987] + ... + ``` + + b. `storage-b-001`: + + ```console + sharded_cluster:storage-b-001> box.space.bands:select() + --- + - - [3, 11, 'Ace of Base', 1987] + - [4, 42, 'The Beatles', 1960] + - [6, 55, 'The Rolling Stones', 1962] + - [9, 299, 'Led Zeppelin', 1968] + - [10, 167, 'Queen', 1970] + ... + ``` diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml new file mode 100644 index 0000000000..4bb62dba8e --- /dev/null +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml @@ -0,0 +1,55 @@ +credentials: + users: + replicator: + password: 'topsecret' + roles: [replication] + storage: + password: 'secret' + roles: [super] + +iproto: + advertise: + peer: replicator@ + sharding: storage@ + +sharding: + bucket_count: 1000 + +groups: + storages: + app: + module: storage + sharding: + roles: [storage] + replication: + failover: manual + replicasets: + storage-a: + leader: storage-a-001 + instances: + storage-a-001: + iproto: + listen: 127.0.0.1:3301 + storage-a-002: + iproto: + listen: 127.0.0.1:3302 + storage-b: + leader: storage-b-002 + instances: + storage-b-001: + iproto: + listen: 127.0.0.1:3303 + storage-b-002: + iproto: + listen: 127.0.0.1:3304 + routers: + app: + module: router + sharding: + roles: [router] + replicasets: + router-a: + instances: + router-a-001: + iproto: + 
listen: 127.0.0.1:3300 \ No newline at end of file diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/instances.yaml b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/instances.yaml new file mode 100644 index 0000000000..368bc16cb6 --- /dev/null +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/instances.yaml @@ -0,0 +1,5 @@ +storage-a-001: +storage-a-002: +storage-b-001: +storage-b-002: +router-a-001: \ No newline at end of file diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua new file mode 100644 index 0000000000..bc4e849af5 --- /dev/null +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua @@ -0,0 +1,26 @@ +local vshard = require('vshard') + +vshard.router.bootstrap() + +function put(id, band_name, year) + local bucket_id = vshard.router.bucket_id_mpcrc32({ id }) + vshard.router.callrw(bucket_id, 'put', { id, bucket_id, band_name, year }) +end + +function get(id) + local bucket_id = vshard.router.bucket_id_mpcrc32({ id }) + return vshard.router.callro(bucket_id, 'get', { id }) +end + +function insert_data() + put(1, 'Roxette', 1986) + put(2, 'Scorpions', 1965) + put(3, 'Ace of Base', 1987) + put(4, 'The Beatles', 1960) + put(5, 'Pink Floyd', 1965) + put(6, 'The Rolling Stones', 1962) + put(7, 'The Doors', 1965) + put(8, 'Nirvana', 1987) + put(9, 'Led Zeppelin', 1968) + put(10, 'Queen', 1970) +end diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua new file mode 100644 index 0000000000..fb9a932349 --- /dev/null +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua @@ -0,0 +1,23 @@ +box.schema.create_space('bands', { + format = { + { name = 'id', type = 'unsigned' }, + 
{ name = 'bucket_id', type = 'unsigned' }, + { name = 'band_name', type = 'string' }, + { name = 'year', type = 'unsigned' } + }, + if_not_exists = true +}) +box.space.bands:create_index('id', { parts = { 'id' }, if_not_exists = true }) +box.space.bands:create_index('bucket_id', { parts = { 'bucket_id' }, unique = false, if_not_exists = true }) + +function put(id, bucket_id, band_name, year) + box.space.bands:insert({ id, bucket_id, band_name, year }) +end + +function get(id) + local tuple = box.space.bands:get(id) + if tuple == nil then + return nil + end + return { tuple.id, tuple.band_name, tuple.year } +end diff --git a/doc/code_snippets/snippets/sharding/tt.yaml b/doc/code_snippets/snippets/sharding/tt.yaml new file mode 100644 index 0000000000..41a3915f50 --- /dev/null +++ b/doc/code_snippets/snippets/sharding/tt.yaml @@ -0,0 +1,54 @@ +modules: + # Directory where the external modules are stored. + directory: modules + +env: + # Restart instance on failure. + restart_on_failure: false + + # Directory that stores binary files. + bin_dir: bin + + # Directory that stores Tarantool header files. + inc_dir: include + + # Path to directory that stores all applications. + # The directory can also contain symbolic links to applications. + instances_enabled: instances.enabled + + # Tarantoolctl artifacts layout compatibility: if set to true tt will not create application + # sub-directories for control socket, pid files, log files, etc.. Data files (wal, vinyl, + # snap) and multi-instance applications are not affected by this option. + tarantoolctl_layout: false + +app: + # Directory that stores various instance runtime + # artifacts like console socket, PID file, etc. + run_dir: var/run + + # Directory that stores log files. + log_dir: var/log + + # Directory where write-ahead log (.xlog) files are stored. + wal_dir: var/lib + + # Directory where memtx stores snapshot (.snap) files. + memtx_dir: var/lib + + # Directory where vinyl files or subdirectories will be stored.
+ vinyl_dir: var/lib + +# Path to file with credentials for downloading Tarantool Enterprise Edition. +# credential_path: /path/to/file +ee: + credential_path: + +templates: + # The path to templates search directory. + - path: templates + +repo: + # Directory where local rocks files could be found. + rocks: + # Directory that stores installation files. + distfiles: distfiles diff --git a/doc/concepts/configuration.rst b/doc/concepts/configuration.rst index 2ad7e15c20..9f59a65d4b 100644 --- a/doc/concepts/configuration.rst +++ b/doc/concepts/configuration.rst @@ -150,7 +150,7 @@ You can learn more about configuring replication from :ref:`Replication tutorial - ``credentials`` (*global*) - This section is used to create the *replicator* and *client* users and assign them the specified roles. + This section is used to create the *replicator* user and assign it the specified role. These options are applied globally to all instances. - ``iproto`` (*global*, *instance*) @@ -423,11 +423,12 @@ The ``credentials`` section allows you to create users and grant them the specif In the example below, there are two users: * The *replicator* user is used for replication and has a corresponding role. -* The *client* user has the ``super`` role and can perform any action on Tarantool instances. +* The *storage* user has the ``super`` role and can perform any action on Tarantool instances. -.. literalinclude:: /code_snippets/snippets/replication/instances.enabled/manual_leader/config.yaml +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml :language: yaml - :lines: 1-8 + :start-at: credentials: + :end-at: roles: [super] :dedent: To learn more, see the :ref:`Access control ` section. 
diff --git a/doc/concepts/configuration/configuration_etcd.rst b/doc/concepts/configuration/configuration_etcd.rst index 37ad8dcbe3..664afb7545 100644 --- a/doc/concepts/configuration/configuration_etcd.rst +++ b/doc/concepts/configuration/configuration_etcd.rst @@ -99,6 +99,10 @@ To publish a cluster's configuration using the ``etcdctl`` utility, use the ``pu $ etcdctl put /example/config/all < cluster.yaml +.. NOTE:: + + For etcd versions earlier than 3.4, you need to set the ``ETCDCTL_API`` environment variable to ``3``. + diff --git a/doc/reference/configuration/configuration_reference.rst b/doc/reference/configuration/configuration_reference.rst index 2fa2ad79f6..e5e04f1eb2 100644 --- a/doc/reference/configuration/configuration_reference.rst +++ b/doc/reference/configuration/configuration_reference.rst @@ -3,13 +3,26 @@ Configuration reference ======================= +.. TODO + https://github.com/tarantool/doc/issues/3664 + This topic describes all :ref:`configuration parameters ` provided by Tarantool. +Most of the configuration options described in this reference can be applied to a specific instance, replica set, group, or to all instances globally. +To do so, you need to define the required option at the :ref:`specified level `. + + .. _configuration_reference_config: config ------ +The ``config`` section defines various parameters related to centralized configuration. + +.. NOTE:: + + ``config`` can be defined in the global :ref:`scope ` only. + * :ref:`config.reload ` * :ref:`config.version ` * :ref:`config.etcd.* ` @@ -28,6 +41,7 @@ config See also: :ref:`Reloading configuration `. + | | Type: string | Possible values: 'auto', 'manual' | Default: 'auto' @@ -42,6 +56,7 @@ config A configuration version. + | | Type: string | Default: nil | Environment variable: TT_CONFIG_VERSION @@ -50,8 +65,8 @@ config .. _configuration_reference_config_etcd: -etcd -~~~~ +config.etcd.* +~~~~~~~~~~~~~ .. 
include:: /concepts/configuration/configuration_etcd.rst :start-after: ee_note_etcd_start @@ -83,6 +98,7 @@ This section describes options related to :ref:`storing configuration in etcd `. + | | Type: array | Default: nil | Environment variable: TT_CONFIG_ETCD_ENDPOINTS @@ -100,6 +116,7 @@ This section describes options related to :ref:`storing configuration in etcd `. + | | Type: string | Default: nil | Environment variable: TT_CONFIG_ETCD_PREFIX @@ -112,6 +129,7 @@ This section describes options related to :ref:`storing configuration in etcd `. + + +- :ref:`credentials.roles.* ` +- :ref:`credentials.users.* ` +- :ref:`.privileges.* ` + + +.. _configuration_reference_credentials_roles: + +.. confval:: credentials.roles + + | Type: map + | Default: nil + | Environment variable: TT_CREDENTIALS_ROLES + + +.. _configuration_reference_credentials_users: + +.. confval:: credentials.users + + | Type: map + | Default: nil + | Environment variable: TT_CREDENTIALS_USERS + + + +.. _configuration_reference_credentials_role: + +credentials.roles.* +~~~~~~~~~~~~~~~~~~~ + +.. _configuration_reference_credentials_roles_name_roles: + +.. confval:: credentials.roles..roles + + +.. _configuration_reference_credentials_roles_name_privileges: + +.. confval:: credentials.roles..privileges + + See :ref:`privileges `. + + +.. _configuration_reference_credentials_user: + +credentials.users.* +~~~~~~~~~~~~~~~~~~~ + + +.. _configuration_reference_credentials_users_name_password: + +.. confval:: credentials.users..password + + +.. _configuration_reference_credentials_users_name_roles: + +.. confval:: credentials.users..roles + + +.. _configuration_reference_credentials_users_name_privileges: + +.. confval:: credentials.users..privileges + + See :ref:`privileges `. + + +.. _configuration_reference_credentials_privileges: + +.privileges.* +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. _configuration_reference_credentials_users_name_privileges_permissions: + +.. 
confval:: .privileges.permissions + + +.. _configuration_reference_credentials_users_name_privileges_spaces: + +.. confval:: .privileges.spaces + + +.. _configuration_reference_credentials_users_name_privileges_functions: + +.. confval:: .privileges.functions + + +.. _configuration_reference_credentials_users_name_privileges_sequences: + +.. confval:: .privileges.sequences + + +.. _configuration_reference_credentials_users_name_privileges_lua_eval: + +.. confval:: .privileges.lua_eval + + +.. _configuration_reference_credentials_users_name_privileges_lua_call: + +.. confval:: .privileges.lua_call + + +.. _configuration_reference_credentials_users_name_privileges_sql: + +.. confval:: .privileges.sql + + + + +.. _configuration_reference_database: + +database +-------- + +The ``database`` section defines database-specific configuration parameters, such as an instance's read-write mode or transaction isolation level. + +.. NOTE:: + + ``database`` can be defined in any :ref:`scope `. + +- :ref:`database.hot_standby ` +- :ref:`database.instance_uuid ` +- :ref:`database.mode ` +- :ref:`database.replicaset_uuid ` +- :ref:`database.txn_isolation ` +- :ref:`database.txn_timeout ` +- :ref:`database.use_mvcc_engine ` + +.. _configuration_reference_database_hot_standby: + +.. confval:: database.hot_standby + + | Type: boolean + | Default: false + | Environment variable: TT_DATABASE_HOT_STANDBY + + +.. _configuration_reference_database_instance_uuid: + +.. confval:: database.instance_uuid + + An :ref:`instance UUID `. + + By default, instance UUIDs are generated automatically. + ``database.instance_uuid`` can be used to specify an instance identifier manually. + + UUIDs should follow these rules: + + * The values must be true unique identifiers, not shared by other instances + or replica sets within the common infrastructure. + + * The values must be used consistently, not changed after the initial setup. 
+ The initial values are stored in :ref:`snapshot files ` + and are checked whenever the system is restarted. + + .. TODO: https://github.com/tarantool/doc/issues/3661 mention that UUIDs can be dropped during migration. + + * The values must comply with `RFC 4122 `_. + The `nil UUID `_ is not allowed. + + See also: :ref:`database.replicaset_uuid ` + + | + | Type: string + | Default: :ref:`box.NULL ` + | Environment variable: TT_DATABASE_INSTANCE_UUID + + +.. _configuration_reference_database_mode: + +.. confval:: database.mode + + An instance's operating mode. + This option is in effect if :ref:`replication.failover ` is set to ``off``. + + The following modes are available: + + - ``rw``: an instance is in read-write mode. + - ``ro``: an instance is in read-only mode. + + If not specified explicitly, the default value depends on the number of instances in a replica set. For a single instance, the ``rw`` mode is used, while for multiple instances, the ``ro`` mode is used. + + **Example** + + You can set the ``database.mode`` option to ``rw`` on all instances in a replica set to make a :ref:`master-master ` configuration. + In this case, ``replication.failover`` should be set to ``off``. + + .. literalinclude:: /code_snippets/snippets/replication/instances.enabled/master_master/config.yaml + :language: yaml + :dedent: + + | Type: string + | Default: :ref:`box.NULL ` (the actual default value depends on the number of instances in a replica set) + | Environment variable: TT_DATABASE_MODE + + +.. _configuration_reference_database_replicaset_uuid: + +.. confval:: database.replicaset_uuid + + A :ref:`replica set UUID `. + + By default, replica set UUIDs are generated automatically. + ``database.replicaset_uuid`` can be used to specify a replica set identifier manually. + + See also: :ref:`database.instance_uuid ` + + | + | Type: string + | Default: :ref:`box.NULL ` + | Environment variable: TT_DATABASE_REPLICASET_UUID + + +.. 
_configuration_reference_database_txn_isolation: + +.. confval:: database.txn_isolation + + A transaction :ref:`isolation level `. + + | + | Type: string + | Default: ``best-effort`` + | Possible values: ``best-effort``, ``read-committed``, ``read-confirmed`` + | Environment variable: TT_DATABASE_TXN_ISOLATION + + +.. _configuration_reference_database_txn_timeout: + +.. confval:: database.txn_timeout + + A timeout (in seconds) after which the transaction is rolled back. + + See also: :ref:`box.begin() ` + + | + | Type: number + | Default: 3153600000 (~100 years) + | Environment variable: TT_DATABASE_TXN_TIMEOUT + + +.. _configuration_reference_database_use_mvcc_engine: + +.. confval:: database.use_mvcc_engine + + Whether the :ref:`transactional manager ` is enabled. + + | + | Type: boolean + | Default: false + | Environment variable: TT_DATABASE_USE_MVCC_ENGINE + + + + + +.. _configuration_reference_iproto: + +iproto +------ + +The ``iproto`` section is used to configure parameters related to communication with and between cluster instances. + +.. NOTE:: + + ``iproto`` can be defined in any :ref:`scope `. + + +- :ref:`iproto.advertise.client ` +- :ref:`iproto.advertise.peer ` +- :ref:`iproto.advertise.sharding ` +- :ref:`iproto.listen ` +- :ref:`iproto.net_msg_max ` +- :ref:`iproto.readahead ` +- :ref:`iproto.threads ` + + +.. _configuration_reference_iproto_advertise_client: + +.. confval:: iproto.advertise.client + + A URI used to advertise the current instance to clients. + + The ``iproto.advertise.client`` option accepts a URI in the following formats: + + - An address: ``host:port``. + + - A Unix domain socket: ``unix/:``. + + Note that this option doesn't allow you to set a username and password. + If a remote client needs this information, it should be delivered outside of the cluster configuration. + + .. host_port_limitations_start + + .. NOTE:: + + The ``host`` value cannot be ``0.0.0.0``/``[::]`` and the ``port`` value cannot be ``0``. + + ..
host_port_limitations_end + + | + | Type: string + | Default: :ref:`box.NULL ` + | Environment variable: TT_IPROTO_ADVERTISE_CLIENT + +.. _configuration_reference_iproto_advertise_peer: + +.. confval:: iproto.advertise.peer + + A URI used to advertise the current instance to other cluster members. + + The ``iproto.advertise.peer`` option accepts a URI in the following formats: + + - User :ref:`credentials ` and an address: ``username@host:port`` or ``username:password@host:port``. + + - User credentials: ``username@`` or ``username:password@``. + In this case, an advertise address is taken from :ref:`iproto.listen `. + + - An address: ``host:port``. + + If ``password`` is missing, it is taken from :ref:`credentials ` for the specified ``username``. + + You can also use a Unix domain socket (``unix/:``) instead of ``host:port``. + + .. include:: /reference/configuration/configuration_reference.rst + :start-after: host_port_limitations_start + :end-before: host_port_limitations_end + + **Example** + + In the example below, the following configuration options are specified: + + - In the :ref:`credentials ` section, the ``replicator`` user with the ``replication`` role is created. + - ``iproto.advertise.peer`` specifies that other instances should connect to an address defined in :ref:`iproto.listen ` using the ``replicator`` user. + + .. literalinclude:: /code_snippets/snippets/replication/instances.enabled/auto_leader/config.yaml + :language: yaml + :start-at: credentials: + :end-at: listen: 127.0.0.1:3303 + :dedent: + + | Type: string + | Default: :ref:`box.NULL ` + | Environment variable: TT_IPROTO_ADVERTISE_PEER + +.. _configuration_reference_iproto_advertise_sharding: + +.. confval:: iproto.advertise.sharding + + An advertise URI used by a router and rebalancer. + + The ``iproto.advertise.sharding`` option accepts a URI in the same formats as :ref:`iproto.advertise.peer `.
+ + **Example** + + In the example below, the following configuration options are specified: + + - In the :ref:`credentials ` section, the ``replicator`` and ``storage`` users are created. + - ``iproto.advertise.peer`` specifies that other instances should connect to an address defined in :ref:`iproto.listen ` with the ``replicator`` user. + - ``iproto.advertise.sharding`` specifies that a router should connect to storages using an address defined in :ref:`iproto.listen ` with the ``storage`` user. + + .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml + :language: yaml + :start-at: credentials: + :end-at: sharding: storage@ + :dedent: + + | Type: string + | Default: :ref:`box.NULL ` + | Environment variable: TT_IPROTO_ADVERTISE_SHARDING + + +.. _configuration_reference_iproto_listen: + +.. confval:: iproto.listen + + An address used to listen for incoming requests. + This address is used for different purposes, for example: + + - Communicating between replica set peers or cluster members. + - Remote administration using :ref:`tt connect `. + - Connecting to an instance using :ref:`connectors ` for different languages. + + To grant the specified privileges for connecting to an instance, use the :ref:`credentials ` configuration section. + + **Example** + + In the example below, ``iproto.listen`` is set explicitly for each instance in a cluster: + + .. literalinclude:: /code_snippets/snippets/replication/instances.enabled/auto_leader/config.yaml + :language: yaml + :start-at: groups: + :end-before: Load sample data + :dedent: + + See also: :ref:`Connection settings `. + + | + | Type: string + | Default: :ref:`box.NULL ` + | Environment variable: TT_IPROTO_LISTEN + + +.. _configuration_reference_iproto_net_msg_max: + +.. confval:: iproto.net_msg_max + + To handle messages, Tarantool allocates :ref:`fibers `. 
+ To prevent fiber overhead from affecting the whole system, + Tarantool restricts how many messages the fibers handle, + so that some pending requests are blocked. + + - On powerful systems, increase ``net_msg_max``, and the scheduler + starts processing pending requests immediately. + + - On weaker systems, decrease ``net_msg_max``, and the overhead + may decrease, although this may take some time because the + scheduler must wait until already-running requests finish. + + When ``net_msg_max`` is reached, + Tarantool suspends processing of incoming packets until it + has processed earlier messages. This is not a direct restriction of + the number of fibers that handle network messages; rather, it + is a system-wide restriction of channel bandwidth. + This in turn restricts the number of incoming + network messages that the + :ref:`transaction processor thread ` + handles, and therefore indirectly affects the fibers that handle + network messages. + + .. NOTE:: + + The number of fibers is smaller than the number of messages because + messages can be released as soon as they are delivered, while + incoming requests might not be processed until some time after delivery. + + | Type: integer + | Default: 768 + | Environment variable: TT_IPROTO_NET_MSG_MAX + + +.. _configuration_reference_iproto_readahead: + +.. confval:: iproto.readahead + + The size of the read-ahead buffer associated with a client connection. + The larger the buffer, the more memory an active connection consumes, and the + more requests can be read from the operating system buffer in a single + system call. + + The recommendation is to make sure that the buffer can contain at least a few dozen requests. + Therefore, if a typical tuple in a request is large, e.g. a few kilobytes or even megabytes, the read-ahead buffer size should be increased. + If batched request processing is not used, it’s prudent to leave this setting at its default.
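+ + **Example** + + The fragment below is an illustrative sketch (the value is not a recommendation): it increases the read-ahead buffer size so that more large batched requests fit into the buffer at once: + + .. code-block:: yaml + +    iproto: +      readahead: 65536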
+ + | + | Type: integer + | Default: 16320 + | Environment variable: TT_IPROTO_READAHEAD + + +.. _configuration_reference_iproto_threads: + +.. confval:: iproto.threads + + The number of :ref:`network threads `. + There can be unusual workloads where the network thread + is 100% loaded and the transaction processor thread is not, so the network + thread is a bottleneck. + In that case, set ``iproto.threads`` to 2 or more. + The operating system kernel determines which connection goes to + which thread. + + | + | Type: integer + | Default: 1 + | Environment variable: TT_IPROTO_THREADS + + + + + +.. _configuration_reference_groups: + +groups +------ + +The ``groups`` section provides the ability to define the :ref:`full topology of a Tarantool cluster `. + +.. NOTE:: + + ``groups`` can be defined in the global :ref:`scope ` only. + +- :ref:`groups.\ ` +- :ref:`groups.\.replicasets ` +- :ref:`groups.\.\ ` + +.. _configuration_reference_groups_name: + +.. confval:: groups. + + A group name. + + +.. _configuration_reference_groups_name_replicasets: + +.. confval:: groups..replicasets + + Replica sets that belong to this group. See :ref:`replicasets `. + + +.. _configuration_reference_groups_name_config_parameter: + +.. confval:: groups.. + + Any configuration parameter that can be defined in the group :ref:`scope `. + For example, :ref:`iproto ` and :ref:`database ` configuration parameters defined at the group level are applied to all instances in this group. + + + +.. _configuration_reference_replicasets: + +replicasets +~~~~~~~~~~~ + +.. NOTE:: + + ``replicasets`` can be defined in the group :ref:`scope ` only. + +- :ref:`replicasets.\ ` +- :ref:`replicasets.\.leader ` +- :ref:`replicasets.\.bootstrap_leader ` +- :ref:`replicasets.\.instances ` +- :ref:`replicasets.\.\ ` + +.. _configuration_reference_replicasets_name: + +.. confval:: replicasets. + + A replica set name. + + +.. _configuration_reference_replicasets_name_leader: + +..
confval:: replicasets..leader + + A replica set leader. + This option can be used to set a replica set leader when ``manual`` :ref:`replication.failover ` is used. + + To perform :ref:`controlled failover `, ``.leader`` can be temporarily removed or set to ``null``. + + **Example** + + .. literalinclude:: /code_snippets/snippets/replication/instances.enabled/manual_leader/config.yaml + :language: yaml + :start-at: replication: + :end-at: listen: 127.0.0.1:3303 + :dedent: + + +.. _configuration_reference_replicasets_name_bootstrap_leader: + +.. confval:: replicasets..bootstrap_leader + + A bootstrap leader for a replica set. + To specify a bootstrap leader manually, you need to set :ref:`replication.bootstrap_strategy ` to ``config``. + + **Example** + + .. literalinclude:: /code_snippets/snippets/replication/instances.enabled/bootstrap_strategy/config.yaml + :language: yaml + :start-at: groups: + :end-at: listen: 127.0.0.1:3303 + :dedent: + + +.. _configuration_reference_replicasets_name_instances: + +.. confval:: replicasets..instances + + Instances that belong to this replica set. See :ref:`instances `. + + +.. _configuration_reference_replicasets_name_config_parameter: + +.. confval:: replicasets.. + + Any configuration parameter that can be defined in the replica set :ref:`scope `. + For example, :ref:`iproto ` and :ref:`database ` configuration parameters defined at the replica set level are applied to all instances in this replica set. + + + +.. _configuration_reference_instances: + +instances +********* + +.. NOTE:: + + ``instances`` can be defined in the replica set :ref:`scope ` only. + +- :ref:`instances.\ ` +- :ref:`instances.\.\ ` + +.. _configuration_reference_instances_name: + +.. confval:: instances. + + An instance name. + + +.. _configuration_reference_instances_name_config_parameter: + +.. confval:: instances.. + + Any configuration parameter that can be defined in the instance :ref:`scope `. 
+ For example, :ref:`iproto ` and :ref:`database ` configuration parameters defined at the instance level are applied to this instance only. + + + + + +.. _configuration_reference_replication: + +replication +----------- + +The ``replication`` section defines configuration parameters related to :ref:`replication `. + +- :ref:`replication.anon ` +- :ref:`replication.bootstrap_strategy ` +- :ref:`replication.connect_timeout ` +- :ref:`replication.election_mode ` +- :ref:`replication.election_timeout ` +- :ref:`replication.election_fencing_mode ` +- :ref:`replication.failover ` +- :ref:`replication.peers ` +- :ref:`replication.skip_conflict ` +- :ref:`replication.sync_lag ` +- :ref:`replication.sync_timeout ` +- :ref:`replication.synchro_quorum ` +- :ref:`replication.synchro_timeout ` +- :ref:`replication.threads ` +- :ref:`replication.timeout ` + + +.. _configuration_reference_replication_anon: + +.. confval:: replication.anon + + Whether to make the current instance act as an anonymous replica. + Anonymous replicas are read-only and are not registered in the ``_cluster`` space. + + | Type: boolean + | Default: ``false`` + | Environment variable: TT_REPLICATION_ANON + + +.. _configuration_reference_replication_bootstrap_strategy: + +.. confval:: replication.bootstrap_strategy + + Specifies a strategy used to bootstrap a :ref:`replica set `. + The following strategies are available: + + * ``auto``: a node doesn't boot if half or more of the other nodes in a replica set are not connected. + For example, if a replica set contains 2 or 3 nodes, a node requires 2 connected instances. + In the case of 4 or 5 nodes, at least 3 connected instances are required. + Moreover, a bootstrap leader fails to boot unless every connected node has chosen it as a bootstrap leader. + + * ``config``: use the specified node to bootstrap a replica set. + To specify the bootstrap leader, use the :ref:`.bootstrap_leader ` option. + + * ``supervised``: a bootstrap leader isn't chosen automatically but should be appointed using ``box.ctl.make_bootstrap_leader()`` on the desired node.
+ + Configuration fails if no bootstrap leader is appointed within the :ref:`replication.connect_timeout ` interval. + + * ``legacy`` (deprecated since :doc:`2.11.0 `): a node requires the :ref:`replication_connect_quorum ` number of other nodes to be connected. + This option is added to keep compatibility with current versions of Cartridge and might be removed in the future. + + | Type: string + | Default: ``auto`` + | Environment variable: TT_REPLICATION_BOOTSTRAP_STRATEGY + + +.. _configuration_reference_replication_connect_timeout: + +.. confval:: replication.connect_timeout + + A timeout (in seconds) that a replica waits when trying to connect to a master in a cluster. + See :ref:`orphan status ` for details. + + This parameter is different from + :ref:`replication.timeout `, + which a master uses to disconnect a replica when the master + receives no acknowledgments of heartbeat messages. + + | + | Type: number + | Default: 30 + | Environment variable: TT_REPLICATION_CONNECT_TIMEOUT + + +.. _configuration_reference_replication_election_mode: + +.. confval:: replication.election_mode + + A role of a replica set node in the :ref:`leader election process `. + + The possible values are: + + * ``off``: a node doesn't participate in the election activities. + + * ``voter``: a node can participate in the election process but can't be a leader. + + * ``candidate``: a node can participate in the election process and become a leader. + + * ``manual``: allows you to control explicitly which instance is the leader instead of relying on automated leader election. + By default, the instance acts like a voter -- it is read-only and may vote for other candidate instances. + Once :ref:`box.ctl.promote() ` is called, the instance becomes a candidate and starts a new election round. + If the instance wins the election, it becomes a leader but won't participate in any new elections.
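+ + **Example** + + The sketch below (group, replica set, and instance names are illustrative) assumes automated failover and configures ``instance003`` as a voter, so it takes part in voting but never becomes a leader: + + .. code-block:: yaml + +    replication: +      failover: election + +    groups: +      group001: +        replicasets: +          replicaset001: +            instances: +              instance001: {} +              instance002: {} +              instance003: +                replication: +                  election_mode: voter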
+ + | + | Type: string + | Default: :ref:`box.NULL ` (the actual default value depends on :ref:`replication.failover `) + | Environment variable: TT_REPLICATION_ELECTION_MODE + + +.. _configuration_reference_replication_election_timeout: + +.. confval:: replication.election_timeout + + Specifies the timeout (in seconds) between election rounds in the + :ref:`leader election process ` if the previous round + ended up with a split vote. + + The default value is quite big; in most cases, it can be lowered to + 300-400 ms. + + To avoid a repeated split vote, the timeout is randomized on each node + during every new election, from 100% to 110% of the original timeout value. + For example, if the timeout is 300 ms and 3 nodes start + the election simultaneously in the same term, + they can set their election timeouts to 300, 310, and 320 respectively, + or to 305, 302, and 324, and so on. This way, the votes are unlikely to be split + because elections on different nodes won't restart simultaneously. + + | + | Type: number + | Default: 5 + | Environment variable: TT_REPLICATION_ELECTION_TIMEOUT + + +.. _configuration_reference_replication_election_fencing_mode: + +.. confval:: replication.election_fencing_mode + + Specifies the :ref:`leader fencing mode ` that + affects the leader election process. When the parameter is set to ``soft`` + or ``strict``, the leader resigns its leadership if it has fewer than + :ref:`replication.synchro_quorum ` + live connections to the cluster nodes. + The resigning leader receives the status of a follower in the current election term and becomes + read-only. + + * In ``soft`` mode, a connection is considered dead if there are no responses for + :ref:`4 * replication.timeout ` seconds both on the current leader and the followers.
+ + * In ``strict`` mode, a connection is considered dead if there are no responses + for :ref:`2 * replication.timeout ` seconds on the + current leader and + :ref:`4 * replication.timeout ` seconds on the + followers. This improves the chances that there is only one leader at any time. + + Fencing applies to the instances that have the + :ref:`replication.election_mode ` set to ``candidate`` or ``manual``. + To turn off leader fencing, set ``election_fencing_mode`` to ``off``. + + | + | Type: string + | Default: ``soft`` + | Possible values: ``off``, ``soft``, ``strict`` + | Environment variable: TT_REPLICATION_ELECTION_FENCING_MODE + + +.. _configuration_reference_replication_failover: + +.. confval:: replication.failover + + A failover mode used to take over a master role when the current master instance fails. + The following modes are available: + + - ``off`` + + Leadership in a replica set is controlled using the :ref:`database.mode ` option. + In this case, you can set the ``database.mode`` option to ``rw`` on all instances in a replica set to make a :ref:`master-master ` configuration. + + The default ``database.mode`` is determined as follows: ``rw`` if there is one instance in a replica set; ``ro`` if there are several instances. + + - ``manual`` + + Leadership in a replica set is controlled using the :ref:`.leader ` option. + In this case, a :ref:`master-master ` configuration is forbidden. + + In the ``manual`` mode, the :ref:`database.mode ` option cannot be set explicitly. + The leader is configured in the read-write mode, all the other instances are read-only. + + - ``election`` + + :ref:`Automated leader election ` is used to control leadership in a replica set. + + In the ``election`` mode, :ref:`database.mode ` and :ref:`.leader ` shouldn't be set explicitly. + + - ``supervised`` (`Enterprise Edition `_ only) + + Leadership in a replica set is controlled using an external failover agent. 
+ + In the ``supervised`` mode, :ref:`database.mode ` and :ref:`.leader ` shouldn't be set explicitly. + + .. TODO: https://github.com/tarantool/enterprise_doc/issues/253 + + .. NOTE:: + + ``replication.failover`` can be defined in the global, group, and replica set :ref:`scope `. + + **Example** + + In the example below, the following configuration options are specified: + + - In the :ref:`credentials ` section, the ``replicator`` user with the ``replication`` role is created. + - :ref:`iproto.advertise.peer ` specifies that other instances should connect to an address defined in :ref:`iproto.listen ` using the ``replicator`` user. + - ``replication.failover`` specifies that a master instance should be set manually. + - :ref:`.leader ` sets ``instance001`` as a replica set leader. + + .. literalinclude:: /code_snippets/snippets/replication/instances.enabled/manual_leader/config.yaml + :language: yaml + :end-before: Load sample data + :dedent: + + | Type: string + | Default: ``off`` + | Environment variable: TT_REPLICATION_FAILOVER + + +.. _configuration_reference_replication_peers: + +.. confval:: replication.peers + + URIs of instances that constitute a replica set. + These URIs are used by an instance to connect to another instance as a replica. + + Alternatively, you can use :ref:`iproto.advertise.peer ` to specify a URI used to advertise the current instance to other cluster members. + + **Example** + + In the example below, the following configuration options are specified: + + - In the :ref:`credentials ` section, the ``replicator`` user with the ``replication`` role is created. + - ``replication.peers`` specifies URIs of replica set instances. + + .. literalinclude:: /code_snippets/snippets/replication/instances.enabled/peers/config.yaml + :language: yaml + :start-at: credentials: + :end-at: - replicator:topsecret@127.0.0.1:3303 + :dedent: + + | Type: array + | Default: :ref:`box.NULL ` + | Environment variable: TT_REPLICATION_PEERS + + +.. 
_configuration_reference_replication_skip_conflict: + +.. confval:: replication.skip_conflict + + By default, if a replica adds a unique key that another replica has + added, replication :ref:`stops ` + with the ``ER_TUPLE_FOUND`` :ref:`error `. + If ``replication.skip_conflict`` is set to ``true``, such errors are ignored. + + .. NOTE:: + + Instead of saving the broken transaction to the write-ahead log, it is written as ``NOP`` (No operation). + + | Type: boolean + | Default: false + | Environment variable: TT_REPLICATION_SKIP_CONFLICT + + +.. _configuration_reference_replication_sync_lag: + +.. confval:: replication.sync_lag + + The maximum delay (in seconds) between the time when data is written to the master and the time when it is written to a replica. + If ``replication.sync_lag`` is set to ``nil`` or 365 * 100 * 86400 (``TIMEOUT_INFINITY``), + a replica is always considered to be "synced". + + .. NOTE:: + + This parameter is ignored during bootstrap. + See :ref:`orphan status ` for details. + + | Type: number + | Default: 10 + | Environment variable: TT_REPLICATION_SYNC_LAG + + +.. _configuration_reference_replication_sync_timeout: + +.. confval:: replication.sync_timeout + + The timeout (in seconds) that a node waits when trying to sync with + other nodes in a replica set after connecting or during a :ref:`configuration update `. + Synchronization can fail indefinitely if :ref:`replication.sync_lag ` is smaller than network latency, or if the replica cannot keep pace with master updates. + If ``replication.sync_timeout`` expires, the replica enters :ref:`orphan status `. + + | + | Type: number + | Default: 0 + | Environment variable: TT_REPLICATION_SYNC_TIMEOUT + + +.. _configuration_reference_replication_synchro_quorum: + +.. confval:: replication.synchro_quorum + + The number of replicas that should confirm the receipt of a :ref:`synchronous ` transaction before it can finish its commit. + + This option supports dynamic evaluation of the quorum number.
+ For example, the default value is ``N / 2 + 1`` where ``N`` is the current number of replicas registered in a cluster. + Once any replicas are added or removed, the expression is re-evaluated automatically. + + Note that the default value (``at least 50% of the cluster size + 1``) guarantees data reliability. + Using a value less than the canonical one might lead to unexpected results, + including a :ref:`split-brain `. + + ``replication.synchro_quorum`` is not used on replicas. If the master fails, the pending synchronous + transactions will be kept waiting on the replicas until a new master is elected. + + .. NOTE:: + + ``replication.synchro_quorum`` does not account for anonymous replicas. + + | Type: string, number + | Default: ``N / 2 + 1`` + | Environment variable: TT_REPLICATION_SYNCHRO_QUORUM + + +.. _configuration_reference_replication_synchro_timeout: + +.. confval:: replication.synchro_timeout + + For :ref:`synchronous replication ` only. + Specifies how many seconds to wait for a synchronous transaction to reach + a replication quorum before it is declared failed and rolled back. + + This option is not used on replicas, so if the master fails, the pending synchronous + transactions will be kept waiting on the replicas until a new master is + elected. + + | + | Type: number + | Default: 5 + | Environment variable: TT_REPLICATION_SYNCHRO_TIMEOUT + + +.. _configuration_reference_replication_threads: + +.. confval:: replication.threads + + The number of threads spawned to decode the incoming replication data. + + In most cases, one thread is enough for all incoming data. + Possible values range from 1 to 1000. + If there are multiple replication threads, the connections served are distributed evenly across the threads. + + | + | Type: integer + | Default: 1 + | Environment variable: TT_REPLICATION_THREADS + + +.. _configuration_reference_replication_timeout: + +..
confval:: replication.timeout + + A time interval (in seconds) used by a master to send heartbeat requests to a replica when there are no updates to send to this replica. + For each request, a replica should return a heartbeat acknowledgment. + + If a master or replica gets no heartbeat message for ``4 * replication.timeout`` seconds, a connection is dropped and a replica tries to reconnect to the master. + + See also: :ref:`Monitoring a replica set `. + + | + | Type: number + | Default: 1 + | Environment variable: TT_REPLICATION_TIMEOUT
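+ + **Example** + + The fragment below is an illustrative sketch: lowering the timeout to ``0.5`` makes a dead connection get detected after ``4 * 0.5 = 2`` seconds instead of the default ``4 * 1 = 4`` seconds, at the cost of more frequent heartbeat traffic: + + .. code-block:: yaml + +    replication: +      timeout: 0.5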