From 25ea5dca5eafaac7101436cc0a6fa8da28e08ebe Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Tue, 13 Feb 2024 16:20:44 +0300 Subject: [PATCH 1/5] Sharding get started --- .../sharded_cluster/README.md | 69 +-- doc/how-to/vshard_quick.rst | 585 ++++++++++++++---- 2 files changed, 460 insertions(+), 194 deletions(-) diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/README.md b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/README.md index cf65ab3619..bde60f127f 100644 --- a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/README.md +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/README.md @@ -1,73 +1,10 @@ # Sharded cluster -A sample application demonstrating how to configure a [sharded](https://www.tarantool.io/en/doc/latest/concepts/sharding/) cluster. +A sample application created in the [Creating a sharded cluster](https://www.tarantool.io/en/doc/latest/how-to/vshard_quick/) tutorial. ## Running -To run the cluster, go to the `sharding` directory in the terminal and perform the following steps: - -1. Install dependencies defined in the `*.rockspec` file: - - ```console - $ tt build sharded_cluster - ``` - -2. Run the cluster: - - ```console - $ tt start sharded_cluster - ``` - -3. Connect to the router: - - ```console - $ tt connect sharded_cluster:router-a-001 - ``` - -4. Perform the initial cluster bootstrap: - - ```console - sharded_cluster:router-a-001> require('vshard').router.bootstrap() - --- - - true - ... - ``` - -5. Insert test data: - - ```console - sharded_cluster:router-a-001> insert_data() - --- - ... - ``` - -6. Connect to storages in different replica sets to see how data is distributed across nodes: - - a. 
`storage-a-001`: - - ```console - sharded_cluster:storage-a-001> box.space.bands:select() - --- - - - [1, 614, 'Roxette', 1986] - - [2, 986, 'Scorpions', 1965] - - [5, 755, 'Pink Floyd', 1965] - - [7, 998, 'The Doors', 1965] - - [8, 762, 'Nirvana', 1987] - ... - ``` - - b. `storage-b-001`: - - ```console - sharded_cluster:storage-b-001> box.space.bands:select() - --- - - - [3, 11, 'Ace of Base', 1987] - - [4, 42, 'The Beatles', 1960] - - [6, 55, 'The Rolling Stones', 1962] - - [9, 299, 'Led Zeppelin', 1968] - - [10, 167, 'Queen', 1970] - ... - ``` +To learn how to run the cluster, see the [Working with the cluster](https://www.tarantool.io/en/doc/latest/how-to/vshard_quick/#working-with-the-cluster) section. ## Packaging @@ -77,5 +14,3 @@ To package an application into a `.tgz` archive, use the `tt pack` command: ```console $ tt pack tgz --app-list sharded_cluster ``` - -Note that the necessary `vshard` dependency is specified in the [sharded_cluster-scm-1.rockspec](sharded_cluster-scm-1.rockspec) file. diff --git a/doc/how-to/vshard_quick.rst b/doc/how-to/vshard_quick.rst index 61f8df9d09..e48dc77c2d 100644 --- a/doc/how-to/vshard_quick.rst +++ b/doc/how-to/vshard_quick.rst @@ -1,160 +1,491 @@ .. _vshard-quick-start: -Quick start with sharding -========================= +Creating a sharded cluster +========================== -For installation instructions, check out the :ref:`vshard installation manual `. +**Example on GitHub**: `sharded_cluster `_ -For a pre-configured development cluster, check out the ``example/`` directory in -the `vshard repository `__. -This example includes 5 Tarantool instances and 2 replica sets: +In this tutorial, you get a sharded cluster up and running on your local machine and learn how to manage the cluster using the tt utility. +To enable sharding in the cluster, the :ref:`vshard ` module is used. 
-* ``router_1`` – a ``router`` instance -* ``storage_1_a`` – a ``storage`` instance, the **master** of the **first** replica set -* ``storage_1_b`` – a ``storage`` instance, the **replica** of the **first** replica set -* ``storage_2_a`` – a ``storage`` instance, the **master** of the **second** replica set -* ``storage_2_b`` – a ``storage`` instance, the **replica** of the **second** replica set +The cluster created in this tutorial includes 5 instances: one router and 4 storages, which constitute two replica sets. -All instances are managed using the :ref:`tt ` administrative utility. +.. image:: /book/admin/admin_instances_dev.png + :align: left + :width: 700 + :alt: Cluster topology -Change the directory to ``example/`` and use ``make`` to run the development cluster: -.. code-block:: console +.. _vshard-quick-start-prerequisites: - $ cd example/ - $ make +Prerequisites +------------- -Essential ``make`` commands you need to know: +Before starting this tutorial: -* ``make start`` – start all Tarantool instances -* ``make stop`` – stop all Tarantool instances -* ``make logcat`` – show logs from all instances -* ``make enter`` – enter the admin console on ``router_1`` -* ``make clean`` – clean up all persistent data -* ``make test`` – run the test suite (you can also run ``test-run.py`` in the ``test`` directory) -* ``make`` – execute ``make stop``, ``make clean``, ``make start`` and ``make enter`` +* :ref:`Install the tt ` utility. +* `Install tarantool `_. -For example, to start all instances, use ``make start``: + .. NOTE:: -.. code-block:: console + The tt utility provides the ability to install Tarantool software using the :ref:`tt install ` command. - $ make start - $ ps x|grep tarantool - 46564 ?? Ss 0:00.34 tarantool storage_1_a.lua - 46566 ?? Ss 0:00.19 tarantool storage_1_b.lua - 46568 ?? Ss 0:00.35 tarantool storage_2_a.lua - 46570 ?? Ss 0:00.20 tarantool storage_2_b.lua - 46572 ?? 
Ss 0:00.25 tarantool router_1.lua -To perform commands in the admin console, use the router's -:ref:`public API `: +.. _vshard-quick-start-creating-app: -.. code-block:: tarantoolsession +Creating a cluster application +------------------------------ - unix/:./data/router_1.control> vshard.router.info() +The :ref:`tt create ` command can be used to create an application from a predefined or custom template. +For example, the built-in ``vshard_cluster`` template enables you to create a ready-to-run sharded cluster application. + +In this tutorial, the application layout is prepared manually: + +1. Create a tt environment in the current directory by executing the :ref:`tt init ` command. + +2. Inside the ``instances.enabled`` directory of the created tt environment, create the ``sharded_cluster`` directory. + +3. Inside ``instances.enabled/sharded_cluster``, create the following files: + + - ``instances.yml`` specifies instances to run in the current environment. + - The ``config.yaml`` file is intended to store the cluster's :ref:`configuration `. + - ``storage.lua`` is intended to store code specific for :ref:`storages `. + - ``router.lua`` is intended to store code specific for a :ref:`router `. + - ``sharded_cluster-scm-1.rockspec`` includes external dependencies required by the application. + + The next :ref:`vshard-quick-start-developing-app` section shows how to configure the cluster and write code specific for a router and storages. + + +.. _vshard-quick-start-developing-app: + +Developing the application +-------------------------- + +.. _vshard-quick-start-configuring-instances: + +Configuring instances to run +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Open the ``instances.yml`` file and add the following content: + +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/instances.yaml + :language: yaml + :dedent: + +This file specifies instances to run in the current environment. + + +.. 
_vshard-quick-start-configuring-cluster: + +Configuring the cluster +~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to configure the cluster in the ``config.yaml`` file. + +.. _vshard-quick-start-configuring-cluster-credentials: + +Step 1: Configuring credentials +******************************* + +Add the :ref:`credentials ` configuration section: + +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml + :language: yaml + :start-at: credentials: + :end-at: roles: [sharding] + :dedent: + +In this section, two users are created: + +* The ``replicator`` user with the ``replication`` role. +* The ``storage`` user with the ``sharding`` role. + + +.. _vshard-quick-start-configuring-cluster-advertise: + +Step 2: Specifying advertise URIs +********************************* + +Add the :ref:`iproto.advertise ` section: + +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml + :language: yaml + :start-after: roles: [sharding] + :end-at: login: storage + :dedent: + +In this section, the following options are configured: + +* ``iproto.advertise.peer`` specifies how to advertise the current instance to other cluster members. + In particular, this option informs other replica set members that the ``replicator`` user should be used to connect to the current instance. +* ``iproto.advertise.sharding`` specifies how to advertise the current instance to a router and rebalancer. + + +.. _vshard-quick-start-configuring-cluster-bucket-count: + +Step 3: Configuring bucket count +******************************** + +Specify the total number of buckets in a sharded cluster using the ``sharding.bucket_count`` option: + +.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml + :language: yaml + :start-after: login: storage + :end-at: bucket_count + :dedent: + + +.. 
_vshard-quick-start-configuring-cluster-topology: + +Step 4: Defining the cluster topology +************************************* + +Define the cluster's topology inside the :ref:`groups ` section. +The cluster includes two groups: + +* ``storages`` includes two replica sets. Each replica set contains two instances. +* ``routers`` includes one router instance. + +Here is a schematic view of the cluster's topology: + +.. code-block:: yaml + + groups: + storages: + replicasets: + storage-a: + # ... + storage-b: + # ... + routers: + replicasets: + router-a: + # ... + +1. To configure storages, add the following code inside the ``groups`` section: + + .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml + :language: yaml + :start-at: storages: + :end-before: routers: + :dedent: + + The main top-level options here are: + + * ``app``: The ``app.module`` option specifies that code specific to storages should be loaded from the ``storage`` module. See also: :ref:`vshard-quick-start-storage-code`. + * ``sharding``: The ``sharding.roles`` option specifies that all instances inside this group act as storages. + A rebalancer is selected automatically from two master instances. + * ``replication``: The :ref:`replication.failover ` option specifies that a leader in each replica set should be specified manually. + * ``replicasets``: This section configures two replica sets that constitute cluster storages. + + +2. To configure a router, add the following code inside the ``groups`` section: + + .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml + :language: yaml + :start-at: routers: + :end-at: 127.0.0.1:3300 + :dedent: + + The main top-level options here are: + + * ``app``: The ``app.module`` option specifies that code specific to a router should be loaded from the ``router`` module. See also: :ref:`vshard-quick-start-router-code`. 
+ * ``sharding``: The ``sharding.roles`` option specifies that an instance inside this group acts as a router.
+ * ``replicasets``: This section configures one replica set with one router instance.
+
+
+Resulting configuration
+***********************
+
+The resulting cluster configuration should look as follows:
+
+.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml
+   :language: yaml
+   :dedent:
+
+
+.. _vshard-quick-start-storage-code:
+
+Adding storage code
+~~~~~~~~~~~~~~~~~~~
+
+1. Open the ``storage.lua`` file and create a space using the :ref:`box.schema.space.create() ` function:
+
+   .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua
+      :language: lua
+      :start-at: box.schema.create_space
+      :end-before: box.space.bands:create_index('id'
+      :dedent:
+
+   Note that the created ``bands`` space includes the ``bucket_id`` field.
+   This field represents a sharding key used to partition a dataset across different storage instances.
+
+2. Create two indexes based on the ``id`` and ``bucket_id`` fields:
+
+   .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua
+      :language: lua
+      :start-at: box.space.bands:create_index('id'
+      :end-at: box.space.bands:create_index('bucket_id'
+      :dedent:
+
+3. Define the ``insert_band`` function that inserts a tuple into the created space:
+
+   .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua
+      :language: lua
+      :start-at: function insert_band
+      :end-before: function get_band
+      :dedent:
+
+4. Define the ``get_band`` function that returns data without the ``bucket_id`` value:
+
+   .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua
+      :language: lua
+      :start-at: function get_band
+      :dedent:
+
+The resulting ``storage.lua`` file should look as follows:
+
+.. 
literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua
+   :language: lua
+   :dedent:
+
+
+.. _vshard-quick-start-router-code:
+
+Adding router code
+~~~~~~~~~~~~~~~~~~
+
+1. Open the ``router.lua`` file and load the ``vshard`` module as follows:
+
+   .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua
+      :language: lua
+      :start-at: local vshard
+      :end-at: local vshard
+      :dedent:
+
+2. Define the ``put`` function used to write data to a storage:
+
+   .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua
+      :language: lua
+      :start-at: function put
+      :end-before: function get
+      :dedent:
+
+   The following ``vshard`` router functions are used:
+
+   * :ref:`vshard.router.bucket_id_mpcrc32() `: Calculates a bucket ID value using a hash function.
+   * :ref:`vshard.router.callrw() `: Inserts a tuple into a storage identified by the generated bucket ID.
+
+3. Create the ``get`` function for getting data:
+
+   .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua
+      :language: lua
+      :start-at: function get
+      :end-before: function insert_data
+      :dedent:
+
+   Inside this function, :ref:`vshard.router.callro() ` is called to get data from a storage identified by the generated bucket ID.
+
+4. Finally, create the ``insert_data()`` function that inserts sample data into the created space:
+
+   .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua
+      :language: lua
+      :start-at: function insert_data
+      :dedent:
+
+The resulting ``router.lua`` file should look as follows:
+
+.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua
+   :language: lua
+   :dedent:
+
+
+
+.. _vshard-quick-start-build-settings:
+
+Configuring build settings
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Open the ``sharded_cluster-scm-1.rockspec`` file and add the following content:
+
+.. 
literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/sharded_cluster-scm-1.rockspec + :language: none + :dedent: + +In the ``dependencies`` section, you can see the specified version of the ``vshard`` module. +To install dependencies, you need to :ref:`build the application `. + + +.. _vshard-quick-start-building-app: + +Building the application +------------------------ + +In the terminal, open a directory where the :ref:`tt environment is created `. +Then, execute the ``tt build`` command: + +.. code-block:: console + + $ tt build sharded_cluster + • Running rocks make + No existing manifest. Attempting to rebuild... + • Application was successfully built + +This installs the ``vshard`` dependency defined in the :ref:`*.rockspec ` file to the ``.rocks`` directory. + + + +.. _vshard-quick-start-working-cluster: + +Working with the cluster +------------------------ + +.. _vshard-quick-start-working-starting-instances: + +Starting instances +~~~~~~~~~~~~~~~~~~ + +To start all instances in the cluster, execute the ``tt start`` command: + +.. code-block:: console + + $ tt start sharded_cluster + • Starting an instance [sharded_cluster:storage-a-001]... + • Starting an instance [sharded_cluster:storage-a-002]... + • Starting an instance [sharded_cluster:storage-b-001]... + • Starting an instance [sharded_cluster:storage-b-002]... + • Starting an instance [sharded_cluster:router-a-001]... + + +.. _vshard-quick-start-working-bootstrap: + +Bootstrapping a cluster +~~~~~~~~~~~~~~~~~~~~~~~ + +To bootstrap the cluster, follow the steps below: + +1. Connect to the router instance using ``tt connect``: + + .. code-block:: console + + $ tt connect sharded_cluster:router-a-001 + • Connecting to the instance... + • Connected to sharded_cluster:router-a-001 + +2. Call :ref:`vshard.router.bootstrap() ` to perform the initial cluster bootstrap: + + .. code-block:: console + + sharded_cluster:router-a-001> vshard.router.bootstrap() + --- + - true + ... 
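+
+3. Optionally, check the total number of buckets the router works with.
+   This extra step is a quick sanity check rather than part of the sample application; it assumes the standard ``vshard.router.bucket_count()`` function, which returns the number of buckets set by the ``sharding.bucket_count`` option (``1000`` in this tutorial's configuration):
+
+   .. code-block:: console
+
+       sharded_cluster:router-a-001> vshard.router.bucket_count()
+       ---
+       - 1000
+       ...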
+ + +.. _vshard-quick-start-working-status: + +Checking status +~~~~~~~~~~~~~~~ + +To check the cluster's status, execute :ref:`vshard.router.info() ` on the router: + +.. code-block:: console + + sharded_cluster:router-a-001> vshard.router.info() --- - replicasets: - ac522f65-aa94-4134-9f64-51ee384f1a54: - replica: &0 + storage-b: + replica: + network_timeout: 0.5 + status: available + uri: storage@127.0.0.1:3304 + name: storage-b-002 + bucket: + available_rw: 500 + master: network_timeout: 0.5 status: available uri: storage@127.0.0.1:3303 - uuid: 1e02ae8a-afc0-4e91-ba34-843a356b8ed7 - uuid: ac522f65-aa94-4134-9f64-51ee384f1a54 - master: *0 - cbf06940-0790-498b-948d-042b62cf3d29: - replica: &1 + name: storage-b-001 + name: storage-b + storage-a: + replica: + network_timeout: 0.5 + status: available + uri: storage@127.0.0.1:3302 + name: storage-a-002 + bucket: + available_rw: 500 + master: network_timeout: 0.5 status: available uri: storage@127.0.0.1:3301 - uuid: 8a274925-a26d-47fc-9e1b-af88ce939412 - uuid: cbf06940-0790-498b-948d-042b62cf3d29 - master: *1 + name: storage-a-001 + name: storage-a bucket: unreachable: 0 available_ro: 0 unknown: 0 - available_rw: 3000 + available_rw: 1000 status: 0 alerts: [] ... -.. _vshard-config-cluster-example: - -Sample configuration --------------------- - -The configuration of a simple sharded cluster can look like this: - -.. 
code-block:: kconfig - - local cfg = { - memtx_memory = 100 * 1024 * 1024, - bucket_count = 10000, - rebalancer_disbalance_threshold = 10, - rebalancer_max_receiving = 100, - sharding = { - ['cbf06940-0790-498b-948d-042b62cf3d29'] = { - replicas = { - ['8a274925-a26d-47fc-9e1b-af88ce939412'] = { - uri = 'storage:storage@127.0.0.1:3301', - name = 'storage_1_a', - master = true - }, - ['3de2e3e1-9ebe-4d0d-abb1-26d301b84633'] = { - uri = 'storage:storage@127.0.0.1:3302', - name = 'storage_1_b' - } - }, - }, - ['ac522f65-aa94-4134-9f64-51ee384f1a54'] = { - replicas = { - ['1e02ae8a-afc0-4e91-ba34-843a356b8ed7'] = { - uri = 'storage:storage@127.0.0.1:3303', - name = 'storage_2_a', - master = true - }, - ['001688c3-66f8-4a31-8e19-036c17d489c2'] = { - uri = 'storage:storage@127.0.0.1:3304', - name = 'storage_2_b' - } - }, - }, - }, - } - -This cluster includes one ``router`` instance and two ``storage`` instances. -Each ``storage`` instance includes one master and one replica. -The ``sharding`` field defines the logical topology of a sharded Tarantool cluster. -All the other fields are passed to ``box.cfg()`` as they are, without modifications. -See the :ref:`Configuration reference ` section for details. - -On routers, call ``vshard.router.cfg(cfg)``: - -.. code-block:: lua - - cfg.listen = 3300 - - -- Start the database with sharding - vshard = require('vshard') - vshard.router.cfg(cfg) - -On storages, call ``vshard.storage.cfg(cfg, instance_uuid)``: - -.. code-block:: lua - - -- Get instance name - local MY_UUID = "de0ea826-e71d-4a82-bbf3-b04a6413e417" - - -- Call a configuration provider - local cfg = require('localcfg') - - -- Start the database with sharding - vshard = require('vshard') - vshard.storage.cfg(cfg, MY_UUID) - -``vshard.storage.cfg()`` automatically calls ``box.cfg()`` and configures the listen -port and replication parameters. 
- -For a sample configuration, see ``router.lua`` and ``storage.lua`` in the -``example/`` directory of the `vshard repository `__. + +.. _vshard-quick-start-working-adding-data: + +Adding data +~~~~~~~~~~~ + +To check how data is distributed across the cluster's nodes, follow the steps below: + +1. On the router, call the ``insert_data()`` function: + + .. code-block:: console + + sharded_cluster:router-a-001> insert_data() + --- + ... + +2. Connect to any storage in the ``storage-a`` replica set: + + .. code-block:: console + + $ tt connect sharded_cluster:storage-a-001 + • Connecting to the instance... + • Connected to sharded_cluster:storage-a-001 + + Then, select all tuples in the ``bands`` space: + + .. code-block:: console + + sharded_cluster:storage-a-001> box.space.bands:select() + --- + - - [1, 614, 'Roxette', 1986] + - [2, 986, 'Scorpions', 1965] + - [5, 755, 'Pink Floyd', 1965] + - [7, 998, 'The Doors', 1965] + - [8, 762, 'Nirvana', 1987] + ... + + +3. Connect to any storage in the ``storage-b`` replica set: + + .. code-block:: console + + $ tt connect sharded_cluster:storage-b-001 + • Connecting to the instance... + • Connected to sharded_cluster:storage-b-001 + + Select all tuples in the ``bands`` space to make sure it contains another subset of data: + + .. code-block:: console + + sharded_cluster:storage-b-001> box.space.bands:select() + --- + - - [3, 11, 'Ace of Base', 1987] + - [4, 42, 'The Beatles', 1960] + - [6, 55, 'The Rolling Stones', 1962] + - [9, 299, 'Led Zeppelin', 1968] + - [10, 167, 'Queen', 1970] + ... 
From cd4a17e16d26213f21b9644e833ced84459859b6 Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Wed, 14 Feb 2024 15:03:31 +0300 Subject: [PATCH 2/5] Sharding get started: update vshard version --- .../sharded_cluster/sharded_cluster-scm-1.rockspec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/sharded_cluster-scm-1.rockspec b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/sharded_cluster-scm-1.rockspec index cc9d8ca85b..d88dc912ee 100644 --- a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/sharded_cluster-scm-1.rockspec +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/sharded_cluster-scm-1.rockspec @@ -5,7 +5,7 @@ source = { } dependencies = { - 'vshard == 0.1.25' + 'vshard == 0.1.26' } build = { type = 'none'; From 196a232d622acb226fb761be9e193a66dbb8db93 Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Wed, 14 Feb 2024 15:03:57 +0300 Subject: [PATCH 3/5] Sharding get started: update function names on storages --- .../sharding/instances.enabled/sharded_cluster/router.lua | 4 ++-- .../sharding/instances.enabled/sharded_cluster/storage.lua | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua index e2c3371909..27fbbd707f 100644 --- a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua @@ -2,12 +2,12 @@ local vshard = require('vshard') function put(id, band_name, year) local bucket_id = vshard.router.bucket_id_mpcrc32({ id }) - vshard.router.callrw(bucket_id, 'put', { id, bucket_id, band_name, year }) + vshard.router.callrw(bucket_id, 'insert_band', { id, bucket_id, band_name, year }) end function get(id) local 
bucket_id = vshard.router.bucket_id_mpcrc32({ id }) - return vshard.router.callro(bucket_id, 'get', { id }) + return vshard.router.callro(bucket_id, 'get_band', { id }) end function insert_data() diff --git a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua index fb9a932349..cd52094de7 100644 --- a/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua +++ b/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster/storage.lua @@ -10,11 +10,11 @@ box.schema.create_space('bands', { box.space.bands:create_index('id', { parts = { 'id' }, if_not_exists = true }) box.space.bands:create_index('bucket_id', { parts = { 'id' }, unique = false, if_not_exists = true }) -function put(id, bucket_id, band_name, year) +function insert_band(id, bucket_id, band_name, year) box.space.bands:insert({ id, bucket_id, band_name, year }) end -function get(id) +function get_band(id) local tuple = box.space.bands:get(id) if tuple == nil then return nil From d4caf1da2b5e58d1b4d272741da72a9414385670 Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Fri, 16 Feb 2024 14:20:43 +0300 Subject: [PATCH 4/5] Sharding get started: update per review --- doc/how-to/vshard_quick.rst | 82 ++++++++++++++++++++++++++----------- 1 file changed, 58 insertions(+), 24 deletions(-) diff --git a/doc/how-to/vshard_quick.rst b/doc/how-to/vshard_quick.rst index e48dc77c2d..49950aed00 100644 --- a/doc/how-to/vshard_quick.rst +++ b/doc/how-to/vshard_quick.rst @@ -95,11 +95,17 @@ Add the :ref:`credentials ` configuration s :end-at: roles: [sharding] :dedent: -In this section, two users are created: +In this section, two users with the specified passwords are created: * The ``replicator`` user with the ``replication`` role. * The ``storage`` user with the ``sharding`` role. +.. 
WARNING:: + + It is recommended to load passwords from safe storage such as external files or environment variables. + You can learn how to do this from :ref:`configuration_credentials_loading_secrets`. + + .. _vshard-quick-start-configuring-cluster-advertise: @@ -170,9 +176,9 @@ Here is a schematic view of the cluster's topology: :end-before: routers: :dedent: - The main top-level options here are: + The main group-level options here are: - * ``app``: The ``app.module`` option specifies that code specific to storages should be loaded from the ``storage`` module. See also: :ref:`vshard-quick-start-storage-code`. + * ``app``: The ``app.module`` option specifies that code specific to storages should be loaded from the ``storage`` module. This is explained below in the :ref:`vshard-quick-start-storage-code` section. * ``sharding``: The ``sharding.roles`` option specifies that all instances inside this group act as storages. A rebalancer is selected automatically from two master instances. * ``replication``: The :ref:`replication.failover ` option specifies that a leader in each replica set should be specified manually. @@ -187,9 +193,9 @@ Here is a schematic view of the cluster's topology: :end-at: 127.0.0.1:3300 :dedent: - The main top-level options here are: + The main group-level options here are: - * ``app``: The ``app.module`` option specifies that code specific to a router should be loaded from the ``router`` module. See also: :ref:`vshard-quick-start-router-code`. + * ``app``: The ``app.module`` option specifies that code specific to a router should be loaded from the ``router`` module. This is explained below in the :ref:`vshard-quick-start-router-code` section. * ``sharding``: The ``sharding.roles`` option specifies that an instance inside this group acts as a router. * ``replicasets``: This section configures one replica set with one router instance. 
@@ -312,7 +318,7 @@ Open the ``sharded_cluster-scm-1.rockspec`` file and add the following content: :language: none :dedent: -In the ``dependencies`` section, you can see the specified version of the ``vshard`` module. +The ``dependencies`` section includes the specified version of the ``vshard`` module. To install dependencies, you need to :ref:`build the application `. @@ -432,14 +438,12 @@ To check the cluster's status, execute :ref:`vshard.router.info() ` function on the router: .. code-block:: console @@ -447,7 +451,36 @@ To check how data is distributed across the cluster's nodes, follow the steps be --- ... -2. Connect to any storage in the ``storage-a`` replica set: + Calling this function :ref:`distributes data ` evenly across the cluster's nodes. + +2. To get a tuple by the specified ID, call the ``get()`` function: + + .. code-block:: console + + sharded_cluster:router-a-001> get(4) + --- + - [4, 'The Beatles', 1960] + ... + +3. To insert a new tuple, call the ``put()`` function: + + .. code-block:: console + + sharded_cluster:router-a-001> put(11, 'The Who', 1962) + --- + ... + + + + +.. _vshard-quick-start-working-adding-data: + +Checking data distribution +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To check how data is distributed across the cluster's nodes, follow the steps below: + +1. Connect to any storage in the ``storage-a`` replica set: .. code-block:: console @@ -461,15 +494,16 @@ To check how data is distributed across the cluster's nodes, follow the steps be sharded_cluster:storage-a-001> box.space.bands:select() --- - - - [1, 614, 'Roxette', 1986] - - [2, 986, 'Scorpions', 1965] - - [5, 755, 'Pink Floyd', 1965] - - [7, 998, 'The Doors', 1965] - - [8, 762, 'Nirvana', 1987] + - - [3, 11, 'Ace of Base', 1987] + - [4, 42, 'The Beatles', 1960] + - [6, 55, 'The Rolling Stones', 1962] + - [9, 299, 'Led Zeppelin', 1968] + - [10, 167, 'Queen', 1970] + - [11, 70, 'The Who', 1962] ... -3. Connect to any storage in the ``storage-b`` replica set: +2. 
Connect to any storage in the ``storage-b`` replica set: .. code-block:: console @@ -483,9 +517,9 @@ To check how data is distributed across the cluster's nodes, follow the steps be sharded_cluster:storage-b-001> box.space.bands:select() --- - - - [3, 11, 'Ace of Base', 1987] - - [4, 42, 'The Beatles', 1960] - - [6, 55, 'The Rolling Stones', 1962] - - [9, 299, 'Led Zeppelin', 1968] - - [10, 167, 'Queen', 1970] + - - [1, 614, 'Roxette', 1986] + - [2, 986, 'Scorpions', 1965] + - [5, 755, 'Pink Floyd', 1965] + - [7, 998, 'The Doors', 1965] + - [8, 762, 'Nirvana', 1987] ... From 514b753f344cec42daa320e512ba21ce33d42c2a Mon Sep 17 00:00:00 2001 From: andreyaksenov Date: Mon, 19 Feb 2024 13:23:29 +0300 Subject: [PATCH 5/5] Sharded cluster: update per TW review --- doc/how-to/vshard_quick.rst | 44 +++++++++++++++++++++++-------------- 1 file changed, 27 insertions(+), 17 deletions(-) diff --git a/doc/how-to/vshard_quick.rst b/doc/how-to/vshard_quick.rst index 49950aed00..211c3b7fc5 100644 --- a/doc/how-to/vshard_quick.rst +++ b/doc/how-to/vshard_quick.rst @@ -43,17 +43,17 @@ In this tutorial, the application layout is prepared manually: 1. Create a tt environment in the current directory by executing the :ref:`tt init ` command. -2. Inside the ``instances.enabled`` directory of the created tt environment, create the ``sharded_cluster`` directory. +2. Inside the empty ``instances.enabled`` directory of the created tt environment, create the ``sharded_cluster`` directory. 3. Inside ``instances.enabled/sharded_cluster``, create the following files: - ``instances.yml`` specifies instances to run in the current environment. - - The ``config.yaml`` file is intended to store the cluster's :ref:`configuration `. - - ``storage.lua`` is intended to store code specific for :ref:`storages `. - - ``router.lua`` is intended to store code specific for a :ref:`router `. - - ``sharded_cluster-scm-1.rockspec`` includes external dependencies required by the application. 
+ - ``config.yaml`` specifies the cluster's :ref:`configuration `. + - ``storage.lua`` contains code specific for :ref:`storages `. + - ``router.lua`` contains code specific for a :ref:`router `. + - ``sharded_cluster-scm-1.rockspec`` specifies external dependencies required by the application. - The next :ref:`vshard-quick-start-developing-app` section shows how to configure the cluster and write code specific for a router and storages. + The next :ref:`vshard-quick-start-developing-app` section shows how to configure the cluster and write code for routing read and write requests to different storages. .. _vshard-quick-start-developing-app: @@ -100,10 +100,12 @@ In this section, two users with the specified passwords are created: * The ``replicator`` user with the ``replication`` role. * The ``storage`` user with the ``sharding`` role. -.. WARNING:: +These users are intended to maintain replication and sharding in the cluster. - It is recommended to load passwords from safe storage such as external files or environment variables. - You can learn how to do this from :ref:`configuration_credentials_loading_secrets`. +.. IMPORTANT:: + + It is not recommended to store passwords as plain text in a YAML configuration. + Learn how to load passwords from safe storage such as external files or environment variables from :ref:`configuration_credentials_loading_secrets`. @@ -132,7 +134,7 @@ In this section, the following options are configured: Step 3: Configuring bucket count ******************************** -Specify the total number of buckets in a sharded cluster using the ``sharding.bucket_count`` option: +Specify the total number of :ref:`buckets ` in a sharded cluster using the ``sharding.bucket_count`` option: .. 
literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml :language: yaml @@ -203,7 +205,7 @@ Here is a schematic view of the cluster's topology: Resulting configuration *********************** -The resulting cluster configuration should look as follows: +The resulting ``config.yaml`` file should look as follows: .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/config.yaml :language: yaml @@ -269,7 +271,7 @@ Adding router code :end-at: local vshard :dedent: -2. Define the ``put`` function used to write data to a storage: +2. Define the ``put`` function that specifies how the router selects the storage to write data: .. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/router.lua :language: lua @@ -327,7 +329,7 @@ To install dependencies, you need to :ref:`build the application `. +In the terminal, open the :ref:`tt environment directory `. Then, execute the ``tt build`` command: .. code-block:: console @@ -368,7 +370,7 @@ To start all instances in the cluster, execute the ``tt start`` command: Bootstrapping a cluster ~~~~~~~~~~~~~~~~~~~~~~~ -To bootstrap the cluster, follow the steps below: +After starting instances, you need to bootstrap the cluster as follows: 1. Connect to the router instance using ``tt connect``: @@ -437,11 +439,19 @@ To check the cluster's status, execute :ref:`vshard.router.info() ` function on the router: