Provide API to obtain the DC information of partitions #20911

Closed
djshow832 opened this issue Nov 6, 2020 · 1 comment · Fixed by #20931
Labels: component/infoschema, type/enhancement


Development Task

Local transactions need to check whether the partitions they write to are bound to their transaction scope.
This is part of the global/local transaction roadmap #20448.

For example:

  1. One TiDB instance is configured with txn_scope=bj, which means the TiDB instance is located in the Beijing DC.
  2. This TiDB instance typically starts local transactions, whose TSOs are allocated from the PD in the Beijing DC.
  3. These transactions can only access data whose leaders are placed on TiKV instances in the Beijing DC.

In the 3rd step, we need to check the partition placement constraint, which is tracked in #20827. To check this constraint, we need to obtain the DC information of partitions.
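
As a rough illustration of where such an API would be used, here is a minimal Go sketch of the check in step 3. Everything in it is hypothetical: checkPartitionScope and the lookupDC callback merely stand in for whatever #20827 ends up using, with lookupDC representing the "DC of a partition" API this issue asks for.

package example

import "fmt"

// checkPartitionScope rejects a write from a local transaction when the
// target partition's leader is not bound to the DC of this TiDB instance.
// lookupDC stands in for the partition-to-DC API requested in this issue.
func checkPartitionScope(txnScope string, partitionID int64,
	lookupDC func(partitionID int64) (dc string, ok bool)) error {
	dc, ok := lookupDC(partitionID)
	if !ok {
		// No leader placement rule found: the partition is not bound to any DC.
		return fmt.Errorf("partition %d is not bound to any DC", partitionID)
	}
	if dc != txnScope {
		return fmt.Errorf("partition %d is bound to DC %q, but txn_scope is %q",
			partitionID, dc, txnScope)
	}
	return nil
}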

Rationale

Placement rules can define the placement of partition leaders. For example, you can place the leader of partition p0 in the Beijing DC with the following SQL:

ALTER TABLE user ALTER PARTITION p0
    ALTER PLACEMENT POLICY ROLE=leader CONSTRAINTS='["+dc=bj"]';

CONSTRAINTS contains the placement constraints that the partition must comply with. The key dc is a label key defined in the TiKV configuration, and bj is the corresponding label value. For example, a TiKV instance may carry the following label configuration:

[server]
labels = "dc=bj,rack=rack0,disk=hdd"

This configuration satisfies the constraint above, so the leader of p0 may be scheduled onto this TiKV instance.
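
To make the matching semantics concrete, here is an illustrative Go sketch (the real matching is done by PD, not TiDB). The LabelConstraint type mirrors the label_constraints entries of the rule group shown later in this issue, where the constraint "+dc=bj" expands to {key: dc, op: in, values: [bj]}.

package example

// LabelConstraint is an illustrative mirror of a single "label_constraints"
// entry in a placement rule.
type LabelConstraint struct {
	Key    string
	Op     string // "in", "notIn", "exists", "notExists"
	Values []string
}

// Match reports whether a store's labels satisfy the constraint. Only the
// "in" operator is handled, because that is what CONSTRAINTS='["+dc=bj"]'
// produces.
func (c LabelConstraint) Match(storeLabels map[string]string) bool {
	if c.Op != "in" {
		return false // other operators omitted in this sketch
	}
	v, ok := storeLabels[c.Key]
	if !ok {
		return false
	}
	for _, want := range c.Values {
		if v == want {
			return true
		}
	}
	return false
}

For example, Match(map[string]string{"dc": "bj", "rack": "rack0", "disk": "hdd"}) returns true for the constraint above.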

Now that placement rules bind partitions to data centers, we can obtain the DC information of a partition from its placement rules.

Placement rules are persisted on PD and also cached in all TiDB instances. The schema version mechanism described in #20809 guarantees that every TiDB instance has the same copy of the placement rules.

Implementations

Placement rules are grouped by partition. Each partition has a placement rule group, which is named after the partition id. Assuming the partition id of p0 is 50, the SQL above should produce a placement rule group like this:

{
    "group_id": "TiDB_DDL_50",
    "group_index": 3,
    "group_override": true,
    "rules": [
      {
        "group_id": "TiDB_DDL_50",
        "id": "0",
        "start_key": "7480000000000000ff3200000000000000f8",
        "end_key": "7480000000000000ff3300000000000000f8",
        "role": "leader",
        "count": 1,
        "label_constraints": [
          {
            "key": "dc",
            "op": "in",
            "values": [
              "bj"
            ]
          }
        ]
      }
    ]
}

Now we focus on the implementation of placement rules on the TiDB side.

placement.Bundle holds a placement rule group as described above. Each placement.Rule in the Bundle defines a rule for a specific Role; here only the Leader role is relevant. So we need to find the group by partition id and then iterate over its rules to find the DC of the leader.
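
A minimal sketch of that iteration is given below, reusing the LabelConstraint type from the sketch in the Rationale section. Rule and Bundle here are illustrative stand-ins that mirror the JSON fields of the rule group above; the real placement.Rule and placement.Bundle definitions may differ in detail.

package example

// Rule and Bundle are illustrative stand-ins for placement.Rule and
// placement.Bundle, mirroring the JSON rule group shown above.
type Rule struct {
	GroupID          string
	ID               string
	Role             string // "leader", "voter", "follower", "learner"
	Count            int
	LabelConstraints []LabelConstraint
}

type Bundle struct {
	ID    string // group name, e.g. "TiDB_DDL_50"
	Rules []*Rule
}

// GetLeaderDC scans the rules of a bundle and returns the DC that the leader
// is constrained to, if any. dcLabelKey is "dc" in this example, matching the
// TiKV label key used in the placement policy.
func GetLeaderDC(bundle *Bundle, dcLabelKey string) (string, bool) {
	for _, rule := range bundle.Rules {
		if rule.Role != "leader" {
			continue
		}
		for _, con := range rule.LabelConstraints {
			if con.Key == dcLabelKey && con.Op == "in" && len(con.Values) == 1 {
				return con.Values[0], true
			}
		}
	}
	return "", false
}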

All bundles are cached in infoSchema, which serves schema queries. A specific bundle can be retrieved by calling infoSchema.BundleByName() with a bundle name (i.e., the group name).
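
Putting it together, the lookup on the TiDB side might look roughly like the sketch below. The group name format TiDB_DDL_<partition id> follows the rule group shown above; the exact signature of infoSchema.BundleByName() (assumed here to return the bundle plus a found flag) and the bundleCache interface are assumptions, not the real infoschema API.

package example

import "fmt"

// bundleCache stands in for the part of infoSchema needed here; the real
// call is infoSchema.BundleByName(), whose exact signature is assumed.
type bundleCache interface {
	BundleByName(name string) (*Bundle, bool)
}

// getPartitionDC resolves the DC of a partition from the cached placement
// bundles, reusing GetLeaderDC from the previous sketch.
func getPartitionDC(is bundleCache, partitionID int64) (string, bool) {
	groupName := fmt.Sprintf("TiDB_DDL_%d", partitionID) // group named after the partition id
	bundle, ok := is.BundleByName(groupName)
	if !ok {
		return "", false
	}
	return GetLeaderDC(bundle, "dc")
}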

Yisaer (Contributor) commented Nov 9, 2020

/assign
