regional-tables.md
---
title: The Regional Table Locality Pattern
summary: Guidance on using the Regional Table Locality Pattern in a multi-region deployment.
toc: true
redirect_from:
  - topology-geo-partitioned-replicas.html
  - topology-geo-partitioned-leaseholders.html
---

In a multi-region deployment, the Regional Table Locality Pattern is a good choice for tables with the following requirements:

- Read and write latency must be low.
- Rows in the table, and all latency-sensitive queries, can be tied to specific geographies (e.g., city, state, region).

{{site.data.alerts.callout_info}} Tables with the Regional Table Locality Pattern can survive zone or region failures, depending on the database-level survival goal setting. {{site.data.alerts.end}}
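As a sketch of how the survival goal is controlled (the database name `test` is an assumption carried over from the example later on this page), the goal can be raised from the default zone-level survival to region-level survival with a single statement:

{% include copy-clipboard.html %}

```sql
-- Raise the survival goal from the default (zone failure) to region
-- failure; this requires at least three database regions and trades
-- some write latency for stronger resiliency.
ALTER DATABASE test SURVIVE REGION FAILURE;

-- To return to the default:
ALTER DATABASE test SURVIVE ZONE FAILURE;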

## Prerequisites

### Fundamentals

{% include {{ page.version.version }}/topology-patterns/multiregion-fundamentals.md %}

### Cluster setup

{% include {{ page.version.version }}/topology-patterns/multi-region-cluster-setup.md %}

## Configuration

{{site.data.alerts.callout_info}} Regional tables (and the other multi-region capabilities) require an Enterprise license. {{site.data.alerts.end}}

### Summary

Using this pattern, you tell CockroachDB to set the table locality to either `REGIONAL BY TABLE` or `REGIONAL BY ROW`, and CockroachDB handles the details.
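For example (a minimal sketch; the table name `users` and the region name `us-east` are placeholders), either locality can be set with one `ALTER TABLE` statement:

```sql
-- Home the entire table (and its indexes) in one region:
ALTER TABLE users SET LOCALITY REGIONAL BY TABLE IN "us-east";

-- Or home each row independently; CockroachDB adds a hidden region
-- column automatically if one is not specified:
ALTER TABLE users SET LOCALITY REGIONAL BY ROW;
```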

### Regional tables

{% include {{page.version.version}}/sql/regional-table-description.md %}

### Regional by row tables

{% include {{page.version.version}}/sql/regional-by-row-table-description.md %}
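A table can also be given row-level locality at creation time rather than via a later `ALTER TABLE`. A minimal sketch (the table and column names are hypothetical):

```sql
CREATE TABLE vehicles (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    status STRING NOT NULL
) LOCALITY REGIONAL BY ROW;
```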

### Steps

{% include {{page.version.version}}/topology-patterns/multiregion-db-setup.md %}

Next, create a `users` table:

{% include copy-clipboard.html %}

```sql
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    city STRING NOT NULL,
    first_name STRING NOT NULL,
    last_name STRING NOT NULL,
    address STRING NOT NULL
);
```

Since all tables in a multi-region cluster default to the `REGIONAL BY TABLE` locality setting, let's set the table's locality to `REGIONAL BY ROW` using the following `ALTER TABLE` statements: `ADD COLUMN`, `ALTER COLUMN`, and `SET LOCALITY`.

{% include copy-clipboard.html %}

```sql
ALTER TABLE users ADD COLUMN region crdb_internal_region AS (
  CASE WHEN city = 'milwaukee' THEN 'us-central'
       WHEN city = 'chicago' THEN 'us-central'
       WHEN city = 'dallas' THEN 'us-central'
       WHEN city = 'new york' THEN 'us-east'
       WHEN city = 'boston' THEN 'us-east'
       WHEN city = 'washington dc' THEN 'us-east'
       WHEN city = 'san francisco' THEN 'us-west'
       WHEN city = 'seattle' THEN 'us-west'
       WHEN city = 'los angeles' THEN 'us-west'
  END
) STORED;
ALTER TABLE users ALTER COLUMN region SET NOT NULL;
ALTER TABLE users SET LOCALITY REGIONAL BY ROW AS "region";
```

{{site.data.alerts.callout_info}} The example above shows how to use a computed column to set the region column's name and value in a regional table. For more information on different ways to set the name and value of the region column, see `ALTER TABLE ... SET LOCALITY`. {{site.data.alerts.end}}
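For instance, instead of computing the region from the `city` column, each row can default to the region of the node that receives the write. A sketch, assuming the same `users` table before any locality change has been applied:

```sql
-- Hypothetical alternative: default the region to the gateway node's
-- region, falling back to the database's primary region when the
-- gateway's region is not a database region.
ALTER TABLE users ADD COLUMN region crdb_internal_region NOT NULL
    DEFAULT default_to_database_primary_region(gateway_region());
ALTER TABLE users SET LOCALITY REGIONAL BY ROW AS "region";
```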

To confirm that your `users` table data is replicated across the cluster in accordance with the `REGIONAL BY ROW` table locality, check the Data Distribution debug page in the DB Console. The underlying zone configurations will look something like the output below (edited for length):

```sql
ALTER DATABASE test CONFIGURE ZONE USING
    num_replicas = 5,
    num_voters = 3,
    constraints = '{+region=us-central: 1, +region=us-east: 1, +region=us-west: 1}',
    voter_constraints = '[+region=us-east]',
    lease_preferences = '[[+region=us-east]]'
...
ALTER PARTITION "us-central" OF INDEX test.public.users@primary CONFIGURE ZONE USING
    num_voters = 3,
    voter_constraints = '[+region=us-central]',
    lease_preferences = '[[+region=us-central]]'
ALTER PARTITION "us-east" OF INDEX test.public.users@primary CONFIGURE ZONE USING
    num_voters = 3,
    voter_constraints = '[+region=us-east]',
    lease_preferences = '[[+region=us-east]]'
ALTER PARTITION "us-west" OF INDEX test.public.users@primary CONFIGURE ZONE USING
    num_voters = 3,
    voter_constraints = '[+region=us-west]',
    lease_preferences = '[[+region=us-west]]'
```
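You can also inspect the same information from a SQL shell. For example:

{% include copy-clipboard.html %}

```sql
-- List the partitions that REGIONAL BY ROW created, one per region:
SHOW PARTITIONS FROM TABLE users;

-- Show the zone configuration applied to the table:
SHOW ZONE CONFIGURATION FROM TABLE users;
```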

{{site.data.alerts.callout_success}} A better way to check that your table locality settings are having the expected effect is to monitor how the performance metrics of a workload change as the settings are applied to a running cluster. For a tutorial showing how to use table localities to improve performance metrics across a multi-region cluster, see Low Latency Reads and Writes in a Multi-Region Cluster. {{site.data.alerts.end}}

## Characteristics

### Latency

For `REGIONAL BY TABLE` tables, you get low latency for single-region writes and multi-region stale reads.

For `REGIONAL BY ROW` tables, you get low-latency, consistent multi-region reads and writes for rows that are homed in specific regions.
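As a sketch of what such a latency-sensitive query might look like (the predicate values are hypothetical), filtering on the computed `region` column lets the optimizer restrict the scan to a single partition:

```sql
-- Can be served entirely from replicas homed in us-east when
-- issued from a gateway node in that region:
SELECT first_name, last_name
FROM users
WHERE region = 'us-east' AND city = 'new york';
```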

### Resiliency

Because the `test` database does not specify a survival goal, it uses the default `ZONE` survival goal. With the default settings, an entire availability zone (AZ) can fail without interrupting access to the database.

For more information about how to choose a database survival goal, see When to use ZONE vs. REGION survival goals.
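To check which survival goal a database is currently using (a sketch reusing the `test` database from the example above):

{% include copy-clipboard.html %}

```sql
SHOW SURVIVAL GOAL FROM DATABASE test;
```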

## Alternatives

- If rows in the table cannot be tied to specific geographies, and reads must be up-to-date for business reasons or because the table is referenced by foreign keys, consider the `GLOBAL` Table Locality Pattern.
- If your application can tolerate historical reads in some cases, consider the Follower Reads pattern.
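As hedged sketches of the two alternatives (the table names are placeholders):

```sql
-- GLOBAL locality: fast, consistent reads from every region,
-- at the cost of slower writes:
ALTER TABLE promo_codes SET LOCALITY GLOBAL;

-- Follower reads: serve a slightly historical read from the
-- nearest replica of any table:
SELECT * FROM users AS OF SYSTEM TIME follower_read_timestamp();
```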

## Tutorial

For a step-by-step demonstration showing how CockroachDB's multi-region capabilities (including `REGIONAL BY ROW` tables) give you low-latency reads in a distributed cluster, see the tutorial on Low Latency Reads and Writes in a Multi-Region Cluster.

## See also

{% include {{ page.version.version }}/topology-patterns/see-also.md %}