[3pt] feedback: Configuration reference | Tarantool #2255


Open
Tracked by #2640
TarantoolBot opened this issue Jul 23, 2021 · 2 comments
Labels
ecosystem [area] Task relates to Tarantool's ecosystem (connector, module, other non-server functionality) vshard [area] Related to vshard module

Comments

@TarantoolBot
Collaborator

TarantoolBot commented Jul 23, 2021

Root document: https://www.tarantool.io/en/doc/latest/reference/reference_rock/vshard/vshard_ref/
SME: @ Gerold103

Details

Feedback:
"

<…>efining the logical topology of the sharded Tarantool cluster.

Type: table
Default: false
Dynamic: yes

weights
A field defining the c<…>

Where I can find description of content of this table?"

Ask @ Gerold103 if it's an issue worth working on.
Do users really need to know the content of the table?
If yes, then we need to update the description of this parameter.

@art-dr art-dr added this to the Estimate [@arctic_dreamer] milestone Aug 18, 2021
@art-dr art-dr added vshard [area] Related to vshard module ecosystem [area] Task relates to Tarantool's ecosystem (connector, module, other non-server functionality) labels Aug 20, 2021
@art-dr art-dr changed the title feedback: Configuration reference | Tarantool [2pt] feedback: Configuration reference | Tarantool Aug 20, 2021
@art-dr art-dr removed this from the Estimate [@arctic_dreamer] milestone Aug 20, 2021
@veod32 veod32 added this to the vshard doc issues milestone Aug 27, 2021
@Gerold103
Contributor

sharding is a table of replicaset UUID -> replicaset config pairs.

Each replicaset config should contain a table of replicas under the key replicas. That table maps replica UUID -> replica config.

Each replica config, in turn, must have a key uri, which specifies the host, port, and credentials (login and password). All in a standard form: 'login:password@host:port', with the port, login, and password being optional. Valid examples are: 'host.ru:123', 'admin:pass@192.168.0.1', etc.
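For illustration, the accepted URI forms might look like this (the hosts, ports, and credentials below are placeholders, not values from the issue):

```lua
-- Placeholder URIs illustrating the 'login:password@host:port' form;
-- every part except the host is optional.
local uri_host_port = 'host.ru:123'                   -- host and port only
local uri_no_port   = 'admin:pass@192.168.0.1'        -- credentials and host
local uri_full      = 'storage:secret@127.0.0.1:3301' -- login:password@host:port
```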

Replica config can have optional keys.

  • name - a string that can be added to the config like a comment. It does not affect anything. Default is nil.
  • zone - the zone where the replica is located; a string or a number. See the weights root config option for more info about what zones do. Default is nil. If a replica has no zone, it gets the worst score in the weights matrix.
  • master - a flag telling whether the replica is a master. Default is nil (not a master). At most one replica in a replicaset can have this flag. It is also possible to leave it false/nil on all replicas, but then read-write requests won't work on the replicaset.
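Putting the required uri together with the optional keys, a single replica entry could be sketched like this (the UUID, URI, and zone name are made up for the example):

```lua
-- Hypothetical replica entry inside a replicaset's replicas table.
['cbf06940-0790-498b-948d-042b62cf3d29'] = {
    uri = 'storage:secret@127.0.0.1:3301', -- required: login:password@host:port
    name = 'storage_1_a',                  -- optional: purely descriptive
    zone = 'zone-1',                       -- optional: used by the weights matrix
    master = true,                         -- optional: at most one per replicaset
}
```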

The replicas table in a replicaset config can be empty. Then the replicaset won't receive any requests at all.

Replicaset config can also have 3 optional keys:

  • weight - the weight of the replicaset; it defines how many buckets the replicaset gets relative to other replicasets. For example, if replicaset1 has weight 50 and replicaset2 has weight 100, the latter gets twice as many buckets as replicaset1. Default is 1.
  • lock - a flag telling whether the replicaset is locked. If set to true, the replicaset won't send nor receive any buckets; the rebalancer will simply ignore it. To take effect, this option must be set at least on the affected replicaset itself in its vshard.storage.cfg. Default is false.
  • master - a string describing how to find the master in this replicaset. So far the option works only on the routers. When set to 'auto', all the replica configs in this replicaset must have master = nil. The router then finds the master automatically, depending on how the replicas are configured on their nodes, and it also notices master changes automatically. Default is nil, in which case the router does no automatic discovery in this replicaset. Different replicasets of the same config can have different values for this option.

Example of a simple config:

sharding = {
    [storage1_uuid] = {
        replicas = {
            [storage1a_uuid] = {
                uri = 'storage:storage@127.0.0.1:3301',
                name = 'storage_1_a',
                master = true
            },
            [storage1b_uuid] = {
                uri = 'storage:storage@127.0.0.1:3302',
                name = 'storage_1_b'
            }
        },
    },
    [storage2_uuid] = {
        replicas = {
            [storage2a_uuid] = {
                uri = 'storage:storage@127.0.0.1:3303',
                name = 'storage_2_a',
                master = true
            },
            [storage2b_uuid] = {
                uri = 'storage:storage@127.0.0.1:3304',
                name = 'storage_2_b'
            }
        },
    },
}

Here replicaset storage1 has two replicas, storage1a and storage1b; storage1a is the master. Replicaset storage2 has two replicas, storage2a and storage2b; storage2a is the master.

Both replicasets get an equal number of buckets. If the config is applied on a router, master auto-discovery is turned off. There are no zones or locks.
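For comparison, a replicaset entry using the optional weight, lock, and master keys described above could be sketched like this (the UUID variables and values are made up; with master = 'auto', the replica entries must not set master themselves):

```lua
-- Hypothetical replicaset entry demonstrating the optional keys.
[storage3_uuid] = {
    weight = 100,    -- gets twice the buckets of a weight-50 replicaset
    lock = false,    -- true would make the rebalancer ignore this replicaset
    master = 'auto', -- the router discovers the master automatically
    replicas = {
        [storage3a_uuid] = { uri = 'storage:storage@127.0.0.1:3305' },
        [storage3b_uuid] = { uri = 'storage:storage@127.0.0.1:3306' },
    },
}
```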

@Totktonada
Member

(I'm the reporter.)

@Gerold103 Thanks for the detailed description!

Ask @ Gerold103 if it's an issue worth working on.
Do users really need to know the content of the table?
If yes, then we need to update the description of this parameter.

The options described on this page are constructed by the user and passed to the vshard.storage.cfg(<...>) and vshard.router.cfg(<...>) functions. A user should know how to construct them in order to configure vshard storages and routers.
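A minimal sketch of how such a config is passed to those functions (the bucket_count value and the instance UUID here are placeholders, not taken from the issue):

```lua
local vshard = require('vshard')

local cfg = {
    sharding = sharding,  -- the table described in the comment above
    bucket_count = 3000,  -- placeholder total bucket count for the cluster
}

-- On a storage node, the instance's own UUID is passed as well:
vshard.storage.cfg(cfg, 'cbf06940-0790-498b-948d-042b62cf3d29')

-- On a router node:
vshard.router.cfg(cfg)
```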

@art-dr art-dr changed the title [2pt] feedback: Configuration reference | Tarantool [3pt] feedback: Configuration reference | Tarantool Oct 8, 2021
@patiencedaur patiencedaur mentioned this issue Feb 1, 2022
21 tasks
@patiencedaur patiencedaur removed this from the vshard milestone Feb 1, 2022
@TarantoolBot TarantoolBot removed the 3sp label Jun 7, 2023