60485: cloudimpl: deprecation notice to GCS `default` cluster setting r=dt a=adityamaru

We want to get rid of the `default` mode of AUTH for GCS in 21.2. This
mode relies on a cluster setting being set with a JSON key. This change
adds a deprecation warning to the description of that cluster setting.
There will be a docs callout to accompany this change.
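
For illustration, a minimal sketch of what marking such a setting as deprecated might look like (the variable name is hypothetical, and `settings.RegisterStringSetting` is the registration helper this sketch assumes; the description string is the one this change ships):

```go
package cloudimpl

import "github.com/cockroachdb/cockroach/pkg/settings"

// gcsDefault is a sketch, not the actual cloudimpl code: the deprecation
// notice is carried entirely in the setting's description string.
var gcsDefault = settings.RegisterStringSetting(
	"cloudstorage.gs.default.key",
	"[deprecated] if set, JSON key to use during Google Cloud Storage operations. "+
		"This setting will be removed in 21.2, as we will no longer support the "+
		"`default` AUTH mode for GCS operations.",
	"", // no default key
)
```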

Fixes: #60433

Release note: None

60510: migrations: add migration to remove pre-19.2 FK representation r=ajwerner a=ajwerner

We've been carrying this old representation for a long time. This migration
is written as simply as I can think to do it. It could perhaps be better.

One thing this patch doesn't do is actually enforce that the old representation
is gone. We'll probably want to do that in validation this cycle. In the next
release we'll stop decoding the fields.
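
As a rough illustration only (all type and field names below are hypothetical, not CockroachDB's actual descriptor protos), the migration's shape is: visit every table descriptor, clear any lingering pre-19.2 FK fields, and rewrite the descriptor:

```go
package main

import "fmt"

// tableDesc stands in for a table descriptor; the fields are hypothetical.
type tableDesc struct {
	Name string
	// DeprecatedFKs models the pre-19.2 representation; since 19.2 the same
	// information has also been stored in the newer representation, so these
	// fields can simply be dropped.
	DeprecatedFKs []string
}

// removeOldFKRepresentation clears the legacy fields and reports how many
// descriptors needed rewriting.
func removeOldFKRepresentation(descs []*tableDesc) (changed int) {
	for _, d := range descs {
		if len(d.DeprecatedFKs) == 0 {
			continue
		}
		d.DeprecatedFKs = nil // the new representation is already in place
		changed++
	}
	return changed
}

func main() {
	descs := []*tableDesc{
		{Name: "orders", DeprecatedFKs: []string{"fk_orders_customers"}},
		{Name: "customers"},
	}
	fmt.Println("descriptors rewritten:", removeOldFKRepresentation(descs))
}
```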

The change should not be user-visible, so no release note.

Release note: None

60770: colexec: extract multiple new packages r=yuzefovich a=yuzefovich

This PR breaks down the huge `colexec` package in order to speed up the build time
(it was pointed out that the package is the bottleneck during bazel builds). With
all of these changes, `colexec` (and its descendants) is barely visible when profiling
the bazel build.

The structure of dependencies is enforced by `dep_test` files added to the new
packages.
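
The PR doesn't reproduce those files here; a minimal sketch of what such a dependency test could look like, using only the standard library (the package path and the forbidden import below are illustrative assumptions, not the real `dep_test` contents):

```go
package colexecutils

import (
	"go/build"
	"testing"
)

// TestNoForbiddenDeps fails if this package grows a direct import that would
// defeat the point of the extraction. Both the package path and the
// forbidden import are hypothetical stand-ins for the real dep_test.
func TestNoForbiddenDeps(t *testing.T) {
	pkg, err := build.Import(
		"github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecutils", "", 0)
	if err != nil {
		t.Fatal(err)
	}
	forbidden := "github.com/cockroachdb/cockroach/pkg/sql/colexec"
	for _, imp := range pkg.Imports {
		if imp == forbidden {
			t.Errorf("forbidden direct import: %s", imp)
		}
	}
}
```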

The following new packages have been extracted:
- `colexecutils` which contains miscellaneous utility operators, structs,
and functions that will be used by several other packages
- `colexechash` which contains all of the code interacting directly
with hashing of data (the hash table and the tuple hash distributor)
- `colexecwindow` which contains all of the code related to window
functions
- `colexecargs` which contains the arguments to and the result of
`NewColOperator` call
- `colexecbase` which contains miscellaneous operators that have very
few dependencies and are depended on by other operators that we want
to extract from `colexec`
- `colexeccmp` which contains objects shared between the projection and
selection operators (namely, things related to LIKE ops and default
comparisons)
- `colexecproj` and `colexecsel` which contain the projection and the
selection operators, respectively
- `colexecjoin` which contains the code for the in-memory joiners (cross,
hash, and merge).

Additionally, multiple other operators have been moved to more appropriate
packages, and the following package moves and renames were performed:
- move `colexecerror` out of `sql/colexecbase` into `sql`
- rename `sql/colexecbase` to `sql/colexecop`.

See individual commits for details.

60928: sql: Enable IMPORT of tables into multi-region databases r=arulajmani,otan,pbardea a=ajstorm

Previously, tables exported from non-multi-region databases could not be
imported into multi-region databases. This commit enables that operation.

Note that the newly added test case also lays the groundwork for testing
export from multi-region databases.

Release note: None

Resolves #59803.

60965: jobs: make Job.id an int64 instead of *int64 r=lucy-zhang a=lucy-zhang

It's no longer valid to create in-memory `Job`s without IDs, so there's
no reason to have the `id` field be a pointer anymore.

This commit should also banish the common mistake of formatting a `*int64`
with `%d` when logging the job ID.
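
A small self-contained example of why: `%d` applied to a `*int64` formats the pointer's address as an integer, not the job ID, so the log line silently shows garbage:

```go
package main

import "fmt"

func main() {
	id := int64(42)
	p := &id
	// With a pointer, %d prints the address as a decimal number — easy to
	// mistake for a (very large) job ID in a log line.
	fmt.Printf("job %d\n", p)  // e.g. "job 824634441832"
	fmt.Printf("job %d\n", *p) // "job 42"
}
```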

Release note: None

Co-authored-by: Aditya Maru <adityamaru@gmail.com>
Co-authored-by: Andrew Werner <ajwerner@cockroachlabs.com>
Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
Co-authored-by: Adam Storm <storm@cockroachlabs.com>
Co-authored-by: Lucy Zhang <lucy@cockroachlabs.com>
6 people committed Feb 24, 2021
6 parents 07bafdb + f1821cd + 82ba332 + e0a1205 + c7932c5 + f5ad15a commit 52ab049
Showing 375 changed files with 13,497 additions and 10,963 deletions.
69 changes: 35 additions & 34 deletions Makefile
@@ -836,49 +836,17 @@ EXECGEN_TARGETS = \
pkg/sql/colconv/datum_to_vec.eg.go \
pkg/sql/colconv/vec_to_datum.eg.go \
pkg/sql/colexec/and_or_projection.eg.go \
pkg/sql/colexec/cast.eg.go \
pkg/sql/colexec/const.eg.go \
pkg/sql/colexec/crossjoiner.eg.go \
pkg/sql/colexec/default_cmp_expr.eg.go \
pkg/sql/colexec/default_cmp_proj_ops.eg.go \
pkg/sql/colexec/default_cmp_sel_ops.eg.go \
pkg/sql/colexec/distinct.eg.go \
pkg/sql/colexec/hashjoiner.eg.go \
pkg/sql/colexec/hashtable_distinct.eg.go \
pkg/sql/colexec/hashtable_full_default.eg.go \
pkg/sql/colexec/hashtable_full_deleting.eg.go \
pkg/sql/colexec/hash_aggregator.eg.go \
pkg/sql/colexec/hash_utils.eg.go \
pkg/sql/colexec/is_null_ops.eg.go \
pkg/sql/colexec/mergejoinbase.eg.go \
pkg/sql/colexec/mergejoiner_exceptall.eg.go \
pkg/sql/colexec/mergejoiner_fullouter.eg.go \
pkg/sql/colexec/mergejoiner_inner.eg.go \
pkg/sql/colexec/mergejoiner_intersectall.eg.go \
pkg/sql/colexec/mergejoiner_leftanti.eg.go \
pkg/sql/colexec/mergejoiner_leftouter.eg.go \
pkg/sql/colexec/mergejoiner_leftsemi.eg.go \
pkg/sql/colexec/mergejoiner_rightanti.eg.go \
pkg/sql/colexec/mergejoiner_rightouter.eg.go \
pkg/sql/colexec/mergejoiner_rightsemi.eg.go \
pkg/sql/colexec/ordered_synchronizer.eg.go \
pkg/sql/colexec/proj_const_left_ops.eg.go \
pkg/sql/colexec/proj_const_right_ops.eg.go \
pkg/sql/colexec/proj_like_ops.eg.go \
pkg/sql/colexec/proj_non_const_ops.eg.go \
pkg/sql/colexec/quicksort.eg.go \
pkg/sql/colexec/rank.eg.go \
pkg/sql/colexec/relative_rank.eg.go \
pkg/sql/colexec/row_number.eg.go \
pkg/sql/colexec/rowstovec.eg.go \
pkg/sql/colexec/selection_ops.eg.go \
pkg/sql/colexec/select_in.eg.go \
pkg/sql/colexec/sel_like_ops.eg.go \
pkg/sql/colexec/sort.eg.go \
pkg/sql/colexec/sort_partitioner.eg.go \
pkg/sql/colexec/substring.eg.go \
pkg/sql/colexec/values_differ.eg.go \
pkg/sql/colexec/vec_comparators.eg.go \
pkg/sql/colexec/window_peer_grouper.eg.go \
pkg/sql/colexec/colexecagg/hash_any_not_null_agg.eg.go \
pkg/sql/colexec/colexecagg/hash_avg_agg.eg.go \
pkg/sql/colexec/colexecagg/hash_bool_and_or_agg.eg.go \
@@ -896,7 +864,40 @@ EXECGEN_TARGETS = \
pkg/sql/colexec/colexecagg/ordered_default_agg.eg.go \
pkg/sql/colexec/colexecagg/ordered_min_max_agg.eg.go \
pkg/sql/colexec/colexecagg/ordered_sum_agg.eg.go \
pkg/sql/colexec/colexecagg/ordered_sum_int_agg.eg.go
pkg/sql/colexec/colexecagg/ordered_sum_int_agg.eg.go \
pkg/sql/colexec/colexecbase/cast.eg.go \
pkg/sql/colexec/colexecbase/const.eg.go \
pkg/sql/colexec/colexecbase/distinct.eg.go \
pkg/sql/colexec/colexeccmp/default_cmp_expr.eg.go \
pkg/sql/colexec/colexechash/hashtable_distinct.eg.go \
pkg/sql/colexec/colexechash/hashtable_full_default.eg.go \
pkg/sql/colexec/colexechash/hashtable_full_deleting.eg.go \
pkg/sql/colexec/colexechash/hash_utils.eg.go \
pkg/sql/colexec/colexecjoin/crossjoiner.eg.go \
pkg/sql/colexec/colexecjoin/hashjoiner.eg.go \
pkg/sql/colexec/colexecjoin/mergejoinbase.eg.go \
pkg/sql/colexec/colexecjoin/mergejoiner_exceptall.eg.go \
pkg/sql/colexec/colexecjoin/mergejoiner_fullouter.eg.go \
pkg/sql/colexec/colexecjoin/mergejoiner_inner.eg.go \
pkg/sql/colexec/colexecjoin/mergejoiner_intersectall.eg.go \
pkg/sql/colexec/colexecjoin/mergejoiner_leftanti.eg.go \
pkg/sql/colexec/colexecjoin/mergejoiner_leftouter.eg.go \
pkg/sql/colexec/colexecjoin/mergejoiner_leftsemi.eg.go \
pkg/sql/colexec/colexecjoin/mergejoiner_rightanti.eg.go \
pkg/sql/colexec/colexecjoin/mergejoiner_rightouter.eg.go \
pkg/sql/colexec/colexecjoin/mergejoiner_rightsemi.eg.go \
pkg/sql/colexec/colexecproj/default_cmp_proj_ops.eg.go \
pkg/sql/colexec/colexecproj/proj_const_left_ops.eg.go \
pkg/sql/colexec/colexecproj/proj_const_right_ops.eg.go \
pkg/sql/colexec/colexecproj/proj_like_ops.eg.go \
pkg/sql/colexec/colexecproj/proj_non_const_ops.eg.go \
pkg/sql/colexec/colexecsel/default_cmp_sel_ops.eg.go \
pkg/sql/colexec/colexecsel/selection_ops.eg.go \
pkg/sql/colexec/colexecsel/sel_like_ops.eg.go \
pkg/sql/colexec/colexecwindow/rank.eg.go \
pkg/sql/colexec/colexecwindow/relative_rank.eg.go \
pkg/sql/colexec/colexecwindow/row_number.eg.go \
pkg/sql/colexec/colexecwindow/window_peer_grouper.eg.go

OPTGEN_TARGETS = \
pkg/sql/opt/memo/expr.og.go \
4 changes: 2 additions & 2 deletions docs/generated/settings/settings-for-tenants.txt
@@ -1,6 +1,6 @@
Setting Type Default Description
bulkio.stream_ingestion.minimum_flush_interval duration 5s the minimum timestamp between flushes; flushes may still occur if internal buffers fill up
cloudstorage.gs.default.key string if set, JSON key to use during Google Cloud Storage operations
cloudstorage.gs.default.key string [deprecated] if set, JSON key to use during Google Cloud Storage operations. This setting will be removed in 21.2, as we will no longer support the `default` AUTH mode for GCS operations.
cloudstorage.http.custom_ca string custom root CA (appended to system's default CAs) for verifying certificates when interacting with HTTPS storage
cloudstorage.timeout duration 10m0s the timeout for import/export storage operations
cluster.organization string organization name
@@ -99,4 +99,4 @@ timeseries.storage.resolution_30m.ttl duration 2160h0m0s the maximum age of time
trace.debug.enable boolean false if set, traces for recent requests can be seen at https://<ui>/debug/requests
trace.lightstep.token string if set, traces go to Lightstep using this token
trace.zipkin.collector string if set, traces go to the given Zipkin instance (example: '127.0.0.1:9411'); ignored if trace.lightstep.token is set
version version 20.2-40 set the active cluster version in the format '<major>.<minor>'
version version 20.2-42 set the active cluster version in the format '<major>.<minor>'
4 changes: 2 additions & 2 deletions docs/generated/settings/settings.html
@@ -2,7 +2,7 @@
<thead><tr><th>Setting</th><th>Type</th><th>Default</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>bulkio.stream_ingestion.minimum_flush_interval</code></td><td>duration</td><td><code>5s</code></td><td>the minimum timestamp between flushes; flushes may still occur if internal buffers fill up</td></tr>
<tr><td><code>cloudstorage.gs.default.key</code></td><td>string</td><td><code></code></td><td>if set, JSON key to use during Google Cloud Storage operations</td></tr>
<tr><td><code>cloudstorage.gs.default.key</code></td><td>string</td><td><code></code></td><td>[deprecated] if set, JSON key to use during Google Cloud Storage operations. This setting will be removed in 21.2, as we will no longer support the `default` AUTH mode for GCS operations.</td></tr>
<tr><td><code>cloudstorage.http.custom_ca</code></td><td>string</td><td><code></code></td><td>custom root CA (appended to system's default CAs) for verifying certificates when interacting with HTTPS storage</td></tr>
<tr><td><code>cloudstorage.timeout</code></td><td>duration</td><td><code>10m0s</code></td><td>the timeout for import/export storage operations</td></tr>
<tr><td><code>cluster.organization</code></td><td>string</td><td><code></code></td><td>organization name</td></tr>
@@ -101,6 +101,6 @@
<tr><td><code>trace.debug.enable</code></td><td>boolean</td><td><code>false</code></td><td>if set, traces for recent requests can be seen at https://<ui>/debug/requests</td></tr>
<tr><td><code>trace.lightstep.token</code></td><td>string</td><td><code></code></td><td>if set, traces go to Lightstep using this token</td></tr>
<tr><td><code>trace.zipkin.collector</code></td><td>string</td><td><code></code></td><td>if set, traces go to the given Zipkin instance (example: '127.0.0.1:9411'); ignored if trace.lightstep.token is set</td></tr>
<tr><td><code>version</code></td><td>version</td><td><code>20.2-40</code></td><td>set the active cluster version in the format '<major>.<minor>'</td></tr>
<tr><td><code>version</code></td><td>version</td><td><code>20.2-42</code></td><td>set the active cluster version in the format '<major>.<minor>'</td></tr>
</tbody>
</table>
14 changes: 12 additions & 2 deletions pkg/BUILD.bazel
@@ -157,11 +157,21 @@ ALL_TESTS = [
"//pkg/sql/colcontainer:colcontainer_test",
"//pkg/sql/colencoding:colencoding_test",
"//pkg/sql/colexec/colbuilder:colbuilder_test",
"//pkg/sql/colexec/colexecagg:colexecagg_test",
"//pkg/sql/colexec/colexecargs:colexecargs_test",
"//pkg/sql/colexec/colexecbase:colexecbase_test",
"//pkg/sql/colexec/colexeccmp:colexeccmp_test",
"//pkg/sql/colexec/colexechash:colexechash_test",
"//pkg/sql/colexec/colexecjoin:colexecjoin_test",
"//pkg/sql/colexec/colexecproj:colexecproj_test",
"//pkg/sql/colexec/colexecsel:colexecsel_test",
"//pkg/sql/colexec/colexectestutils:colexectestutils_test",
"//pkg/sql/colexec/colexecutils:colexecutils_test",
"//pkg/sql/colexec/colexecwindow:colexecwindow_test",
"//pkg/sql/colexec/execgen:execgen_test",
"//pkg/sql/colexec:colexec_test",
"//pkg/sql/colexecbase/colexecerror:colexecerror_test",
"//pkg/sql/colexecbase:colexecbase_test",
"//pkg/sql/colexecerror:colexecerror_test",
"//pkg/sql/colexecop:colexecop_test",
"//pkg/sql/colflow/colrpc:colrpc_test",
"//pkg/sql/colflow:colflow_test",
"//pkg/sql/colmem:colmem_test",
10 changes: 5 additions & 5 deletions pkg/ccl/backupccl/backup_job.go
@@ -535,7 +535,7 @@ func (b *backupResumer) ReportResults(ctx context.Context, resultsCh chan<- tree
case <-ctx.Done():
return ctx.Err()
case resultsCh <- tree.Datums{
tree.NewDInt(tree.DInt(*b.job.ID())),
tree.NewDInt(tree.DInt(b.job.ID())),
tree.NewDString(string(jobs.StatusSucceeded)),
tree.NewDFloat(tree.DFloat(1.0)),
tree.NewDInt(tree.DInt(b.backupStats.Rows)),
@@ -564,7 +564,7 @@ func (b *backupResumer) readManifestOnResume(
return nil, errors.Wrapf(err, "reading backup checkpoint")
}
// Try reading temp checkpoint.
tmpCheckpoint := tempCheckpointFileNameForJob(*b.job.ID())
tmpCheckpoint := tempCheckpointFileNameForJob(b.job.ID())
desc, err = readBackupManifest(ctx, defaultStore, tmpCheckpoint, details.EncryptionOptions)
if err != nil {
return nil, err
@@ -610,7 +610,7 @@ func (b *backupResumer) maybeNotifyScheduledJobCompletion(
fmt.Sprintf(
"SELECT created_by_id FROM %s WHERE id=$1 AND created_by_type=$2",
env.SystemJobsTableName()),
*b.job.ID(), jobs.CreatedByScheduledJobs)
b.job.ID(), jobs.CreatedByScheduledJobs)

if err != nil {
return errors.Wrap(err, "schedule info lookup")
@@ -622,10 +622,10 @@

scheduleID := int64(tree.MustBeDInt(datums[0]))
if err := jobs.NotifyJobTermination(
ctx, env, *b.job.ID(), jobStatus, b.job.Details(), scheduleID, exec.InternalExecutor, txn); err != nil {
ctx, env, b.job.ID(), jobStatus, b.job.Details(), scheduleID, exec.InternalExecutor, txn); err != nil {
log.Warningf(ctx,
"failed to notify schedule %d of completion of job %d; err=%s",
scheduleID, *b.job.ID(), err)
scheduleID, b.job.ID(), err)
}
return nil
}); err != nil {
20 changes: 19 additions & 1 deletion pkg/ccl/backupccl/restore_job.go
Original file line number Diff line number Diff line change
@@ -363,6 +363,24 @@ func WriteDescriptors(
table.GetID(), table)
}
}

// If the table descriptor is being written to a multi-region database and
// the table does not have a locality config setup, set one up here. The
// table's locality config will be set to the default locality - REGIONAL
// BY TABLE IN PRIMARY REGION.
_, dbDesc, err := descsCol.GetImmutableDatabaseByID(
ctx, txn, table.GetParentID(), tree.DatabaseLookupFlags{
Required: true,
AvoidCached: true,
IncludeOffline: true,
})
if err != nil {
return err
}
if dbDesc.GetRegionConfig() != nil && table.GetLocalityConfig() == nil {
table.(*tabledesc.Mutable).SetTableLocalityRegionalByTable(tree.PrimaryRegionLocalityName)
}

if err := descsCol.WriteDescToBatch(
ctx, false /* kvTrace */, tables[i].(catalog.MutableDescriptor), b,
); err != nil {
@@ -1441,7 +1459,7 @@ func (r *restoreResumer) ReportResults(ctx context.Context, resultsCh chan<- tre
case <-ctx.Done():
return ctx.Err()
case resultsCh <- tree.Datums{
tree.NewDInt(tree.DInt(*r.job.ID())),
tree.NewDInt(tree.DInt(r.job.ID())),
tree.NewDString(string(jobs.StatusSucceeded)),
tree.NewDFloat(tree.DFloat(1.0)),
tree.NewDInt(tree.DInt(r.restoreStats.Rows)),
4 changes: 2 additions & 2 deletions pkg/ccl/changefeedccl/changefeed_stmt.go
@@ -543,7 +543,7 @@ func generateChangefeedSessionID() string {
func (b *changefeedResumer) Resume(ctx context.Context, execCtx interface{}) error {
jobExec := execCtx.(sql.JobExecContext)
execCfg := jobExec.ExecCfg()
jobID := *b.job.ID()
jobID := b.job.ID()
details := b.job.Details().(jobspb.ChangefeedDetails)
progress := b.job.Progress()

@@ -679,7 +679,7 @@ func (b *changefeedResumer) OnPauseRequest(

execCfg := jobExec.(sql.JobExecContext).ExecCfg()
pts := execCfg.ProtectedTimestampProvider
return createProtectedTimestampRecord(ctx, execCfg.Codec, pts, txn, *b.job.ID(),
return createProtectedTimestampRecord(ctx, execCfg.Codec, pts, txn, b.job.ID(),
details.Targets, *resolved, cp)
}

2 changes: 1 addition & 1 deletion pkg/ccl/importccl/import_processor_test.go
Original file line number Diff line number Diff line change
@@ -549,7 +549,7 @@ type cancellableImportResumer struct {
}

func (r *cancellableImportResumer) Resume(ctx context.Context, execCtx interface{}) error {
r.jobID = *r.wrapped.job.ID()
r.jobID = r.wrapped.job.ID()
r.jobIDCh <- r.jobID
if err := r.wrapped.Resume(r.ctx, execCtx); err != nil {
return err
4 changes: 2 additions & 2 deletions pkg/ccl/importccl/import_stmt.go
Original file line number Diff line number Diff line change
@@ -1304,7 +1304,7 @@ func (r *importResumer) prepareTableDescsForIngestion(
func (r *importResumer) ReportResults(ctx context.Context, resultsCh chan<- tree.Datums) error {
select {
case resultsCh <- tree.Datums{
tree.NewDInt(tree.DInt(*r.job.ID())),
tree.NewDInt(tree.DInt(r.job.ID())),
tree.NewDString(string(jobs.StatusSucceeded)),
tree.NewDFloat(tree.DFloat(1.0)),
tree.NewDInt(tree.DInt(r.res.Rows)),
@@ -1505,7 +1505,7 @@ func (r *importResumer) parseBundleSchemaIfNeeded(ctx context.Context, phs inter
if err := r.job.RunningStatus(ctx, nil /* txn */, func(_ context.Context, _ jobspb.Details) (jobs.RunningStatus, error) {
return runningStatusImportBundleParseSchema, nil
}); err != nil {
return errors.Wrapf(err, "failed to update running status of job %d", errors.Safe(*r.job.ID()))
return errors.Wrapf(err, "failed to update running status of job %d", errors.Safe(r.job.ID()))
}

var tableDescs []*tabledesc.Mutable
8 changes: 4 additions & 4 deletions pkg/ccl/importccl/import_stmt_test.go
Original file line number Diff line number Diff line change
@@ -4752,7 +4752,7 @@ func TestImportControlJobRBAC(t *testing.T) {
rootJob := startLeasedJob(t, rootJobRecord)

// Test root can control root job.
rootDB.Exec(t, tc.controlQuery, *rootJob.ID())
rootDB.Exec(t, tc.controlQuery, rootJob.ID())
require.NoError(t, err)

// Start import job as non-admin user.
@@ -4761,7 +4761,7 @@
userJob := startLeasedJob(t, nonAdminJobRecord)

// Test testuser can control testuser job.
_, err := testuser.Exec(tc.controlQuery, *userJob.ID())
_, err := testuser.Exec(tc.controlQuery, userJob.ID())
require.NoError(t, err)

// Start second import job as root.
@@ -4771,11 +4771,11 @@
userJob2 := startLeasedJob(t, nonAdminJobRecord)

// Test root can control testuser job.
rootDB.Exec(t, tc.controlQuery, *userJob2.ID())
rootDB.Exec(t, tc.controlQuery, userJob2.ID())
require.NoError(t, err)

// Test testuser CANNOT control root job.
_, err = testuser.Exec(tc.controlQuery, *rootJob2.ID())
_, err = testuser.Exec(tc.controlQuery, rootJob2.ID())
require.True(t, testutils.IsError(err, "only admins can control jobs owned by other admins"))
})
}
100 changes: 100 additions & 0 deletions pkg/ccl/logictestccl/testdata/logic_test/multi_region_import_export
@@ -0,0 +1,100 @@
# LogicTest: multiregion-9node-3region-3azs

query TTTT colnames
SHOW REGIONS
----
region zones database_names primary_region_of
ap-southeast-2 {ap-az1,ap-az2,ap-az3} {} {}
ca-central-1 {ca-az1,ca-az2,ca-az3} {} {}
us-east-1 {us-az1,us-az2,us-az3} {} {}

query TT colnames
SHOW REGIONS FROM CLUSTER
----
region zones
ap-southeast-2 {ap-az1,ap-az2,ap-az3}
ca-central-1 {ca-az1,ca-az2,ca-az3}
us-east-1 {us-az1,us-az2,us-az3}

statement ok
CREATE DATABASE non_multi_region_db

statement ok
CREATE DATABASE multi_region_test_db PRIMARY REGION "ca-central-1" REGIONS "ap-southeast-2", "us-east-1" SURVIVE REGION FAILURE

statement ok
USE multi_region_test_db;
CREATE TABLE regional_primary_region_table (a int) LOCALITY REGIONAL BY TABLE IN PRIMARY REGION

statement ok
CREATE TABLE "regional_us-east-1_table" (a int) LOCALITY REGIONAL BY TABLE IN "us-east-1"

statement ok
CREATE TABLE global_table (a int) LOCALITY GLOBAL

statement ok
CREATE TABLE regional_by_row_table (
pk int PRIMARY KEY,
pk2 int NOT NULL,
a int NOT NULL,
b int NOT NULL,
j JSON,
INDEX (a),
UNIQUE (b),
INVERTED INDEX (j),
FAMILY (pk, pk2, a, b)
) LOCALITY REGIONAL BY ROW

statement ok
use non_multi_region_db

statement ok
CREATE TABLE team (
id int PRIMARY KEY,
name string,
likes string[],
dislikes string[]
)

statement ok
INSERT INTO team VALUES (1, 'arulajmani', ARRAY['turkey','coffee','ps5'], ARRAY['going outside in winter','denormalization']);
INSERT INTO team VALUES (2, 'otan', ARRAY['Sydney suburbs','cricket','vim'], ARRAY['flaky tests','onboarding'])

query ITTT colnames
SELECT * FROM team
----
id name likes dislikes
1 arulajmani {turkey,coffee,ps5} {"going outside in winter",denormalization}
2 otan {"Sydney suburbs",cricket,vim} {"flaky tests",onboarding}

statement ok
EXPORT INTO CSV 'nodelocal://1/team_export/' WITH DELIMITER = '|' FROM TABLE team

statement ok
use multi_region_test_db;
IMPORT TABLE team (
id int PRIMARY KEY,
name string,
likes string[],
dislikes string[]
)
CSV DATA ('nodelocal://1/team_export/export*.csv') WITH DELIMITER = '|'

query ITTT colnames
SELECT * FROM team
----
id name likes dislikes
1 arulajmani {turkey,coffee,ps5} {"going outside in winter",denormalization}
2 otan {"Sydney suburbs",cricket,vim} {"flaky tests",onboarding}

query TT
SHOW CREATE TABLE team
----
team CREATE TABLE public.team (
id INT8 NOT NULL,
name STRING NULL,
likes STRING[] NULL,
dislikes STRING[] NULL,
CONSTRAINT "primary" PRIMARY KEY (id ASC),
FAMILY "primary" (id, name, likes, dislikes)
) LOCALITY REGIONAL BY TABLE IN PRIMARY REGION