Local transactions need to check whether the partitions they write to are bound to their own txn_scope.
This is a part of the global/local transaction roadmap #20448.
For example:
1. One TiDB instance is configured with txn_scope=bj, which means the TiDB instance is located in the Beijing DC.
2. This TiDB instance typically starts local transactions, and the TSO is allocated from a PD in the Beijing DC.
3. These transactions can only visit data whose leaders are placed on TiKV instances in the Beijing DC.
In the 3rd step above, we need to check the partition placement constraint, which is tracked in #20827. To check this constraint, we need to obtain the DC information of partitions.
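To make the intended check concrete, a minimal sketch is shown below. The function name, its signature, and the leaderDC lookup parameter are all hypothetical; the lookup itself is the subject of the Implementations section.

package localtxn

import "fmt"

// checkPartitionScope sketches the constraint: every partition written by a
// local transaction must have its leader bound to the same DC as the
// transaction's txn_scope. leaderDC is a caller-supplied lookup backed by
// placement rules (see the Implementations section); its signature is assumed.
func checkPartitionScope(txnScope string, partitionIDs []int64, leaderDC func(partitionID int64) (string, bool)) error {
	for _, pid := range partitionIDs {
		dc, ok := leaderDC(pid)
		if !ok || dc != txnScope {
			return fmt.Errorf("partition %d is not bound to txn_scope %q", pid, txnScope)
		}
	}
	return nil
}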
Rationale
Placement rules can define the placement of partition leaders. For example, you can place the leader of partition p0 in the Beijing DC through the following SQL:
ALTER TABLE user ALTER PARTITION p0
ALTER PLACEMENT POLICY ROLE=leader CONSTRAINTS='["+dc=bj"]';
CONSTRAINTS contains the placement constraints that partitions must comply with. The key dc is a label key defined in the TiKV configuration, and bj is the corresponding label value. For example, a TiKV instance may carry the following label configuration:
[server]
labels = "dc=bj,rack=rack0,disk=hdd"
The constraint above is satisfied by this configuration, so the leader of p0 may be scheduled on this TiKV instance.
Now that placement rules have bound partitions to data centers, we can get DC information for partitions from placement rules.
Placement rules are persisted on PD but also cached in all TiDB instances. The schema version mechanism described in #20809 guarantees that each TiDB instance has the same copy of the placement rules.
Implementations
Placement rules are grouped by partition. Each partition has a placement rule group, which is named after the partition id. Assuming the partition id of p0 is 50, the SQL above should produce a placement rule group like the following:
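A sketch of the approximate shape of such a group is shown below; the field names follow the PD placement rule format, while the TiDB_DDL_50 group name, the rule id, and the omitted key range are illustrative assumptions:

{
  "group_id": "TiDB_DDL_50",
  "group_index": 0,
  "group_override": false,
  "rules": [
    {
      "group_id": "TiDB_DDL_50",
      "id": "0",
      "role": "leader",
      "count": 1,
      "label_constraints": [
        { "key": "dc", "op": "in", "values": ["bj"] }
      ]
    }
  ]
}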
Now we focus on the implementations of placement rules on the TiDB side.
placement.Bundle contains a placement rule group as described above. Each placement.Rule in a Bundle defines a rule for a specific Role. Here, only the Leader role is concerned. So we need to find the group by the partition id and then iterate over all the rules to find the DC for the Leader.
All rule-group bundles are cached in infoSchema, which serves schema queries. A specific bundle can be queried by calling infoSchema.BundleByName() with the bundle name (i.e., the group name).
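A minimal sketch of this lookup is shown below. The types only approximate the real placement.Bundle and placement.Rule structs, and the "TiDB_DDL_<partition id>" bundle-name format, the constraint fields, and the role value are assumptions made for illustration:

package localtxn

import "fmt"

// Simplified stand-ins for placement.Bundle and placement.Rule; the real
// structs in TiDB's placement package have more fields and may use
// different names.
type Constraint struct {
	Key    string
	Op     string // e.g. "in", "notIn"
	Values []string
}

type Rule struct {
	Role        string // "leader", "voter", "follower", "learner"
	Count       int
	Constraints []Constraint
}

type Bundle struct {
	ID    string
	Rules []*Rule
}

// InfoSchema mirrors the only method needed here: BundleByName queries the
// cached rule group by its name.
type InfoSchema interface {
	BundleByName(name string) (*Bundle, bool)
}

// LeaderDC looks up the bundle named after the partition id and scans its
// rules for the leader role, returning the DC its constraint pins it to.
// The "TiDB_DDL_%d" naming scheme is an assumption for illustration.
func LeaderDC(is InfoSchema, partitionID int64) (string, bool) {
	bundle, ok := is.BundleByName(fmt.Sprintf("TiDB_DDL_%d", partitionID))
	if !ok {
		return "", false
	}
	for _, rule := range bundle.Rules {
		if rule.Role != "leader" {
			continue
		}
		for _, c := range rule.Constraints {
			// A constraint like {Key: "dc", Op: "in", Values: ["bj"]}
			// binds the leader to exactly one DC.
			if c.Key == "dc" && c.Op == "in" && len(c.Values) == 1 {
				return c.Values[0], true
			}
		}
	}
	return "", false
}

The DC returned by this lookup can then be compared against the session's txn_scope, which is exactly the check sketched at the top of this issue.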