…107752 #107802 #107803

106508: util/must: add runtime assertion API r=erikgrinaker a=erikgrinaker

For details and usage examples, see the [package documentation](https://github.com/erikgrinaker/cockroach/blob/must/pkg/util/must/must.go).

---

This patch adds a convenient and canonical API for runtime assertions, inspired by the Testify package used for Go test assertions. It is intended to encourage liberal use of runtime assertions throughout the code base, by making it as easy as possible to write assertions that follow best practices. It does not attempt to reinvent the wheel, but instead builds on existing infrastructure.

Assertion failures are fatal in all non-release builds, including roachprod clusters and roachtests, to ensure they will be noticed. In release builds, they instead log the failure and report it to Sentry (if enabled), and return an assertion error to the caller for propagation. This avoids excessive disruption in production environments, where an assertion failure is often scoped to an individual RPC request, transaction, or range, and crashing the node can turn a minor problem into a full-blown outage. It is still possible to kill the node when appropriate via `log.Fatalf`, but this should be the exception rather than the norm.

It also supports expensive assertions that must be compiled out of normal dev/test/release builds for performance reasons. These are instead enabled in special test builds.

This is intended to be used instead of other existing assertion mechanisms, which have various shortcomings:

* `log.Fatalf`: kills the node even in release builds, which can cause severe disruption over often minor issues.

* `errors.AssertionFailedf`: only suitable when we have an error return path, does not fatal in non-release builds, and we are not always notified of failures in release builds.

* `logcrash.ReportOrPanic`: panics rather than fatals, which can leave the node limping along. Requires the caller to implement separate assertion handling in release builds, which is easy to forget. Also requires propagating cluster settings, which aren't always available.

* `buildutil.CrdbTestBuild`: only enabled in Go tests, not roachtests, roachprod clusters, or production clusters.

* `util.RaceEnabled`: only enabled in race builds. Expensive assertions should be possible to run without the additional overhead of the race detector.

For more details and examples, see the `must` package documentation.
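
A rough sketch of the intended call pattern is below. The helper name `must.Greater` and its signature are assumptions in the spirit of the Testify-style API described above, not the actual package API; see the linked package documentation for the real functions.

```go
package kvexample

import (
	"context"

	"github.com/cockroachdb/cockroach/pkg/util/must"
)

// applyIndex returns the next applied index, asserting that it never regresses.
// The must call is illustrative: it fatals in non-release builds; in release
// builds it logs, reports to Sentry (if enabled), and returns an assertion
// error that the caller propagates.
func applyIndex(ctx context.Context, lastApplied, next uint64) (uint64, error) {
	if err := must.Greater(ctx, next, lastApplied,
		"applied index regressed: %d <= %d", next, lastApplied); err != nil {
		return 0, err
	}
	return next, nil
}
```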

Resolves #94986.
Epic: none
Release note: None

107094: streamingest: unskip TestTenantStreamingUnavailableStreamAddress r=lidorcarmel a=lidorcarmel

Changing a few things to get this test to pass under stress:
- use 50 ranges instead of 10, because there are already 50-ish system ranges,
  so if we write only 10 more ranges those might not get distributed on all
  servers.
- avoid reading from the source cluster after stopping a node; it's flaky,
  see #107499 for more info.

Epic: none
Fixes: #107023
Fixes: #106865

Release note: None

107717: server/profiler: remove `server.cpu_profile.enabled` setting r=xinhaoz a=xinhaoz

CPU profiling can be enabled by setting the cluster setting `server.cpu_profile.cpu_usage_combined_threshold`. This makes `server.cpu_profile.enabled` redundant and makes enabling CPU profiling more difficult and confusing. This commit removes the `server.cpu_profile.enabled` setting entirely. Note that the default values of both cluster settings leave profiling off.

Closes: #102024

Release note (sql change): The cluster setting
`server.cpu_profile.enabled` has been removed.
`server.cpu_profile.cpu_usage_combined_threshold` can be used to enable and disable CPU profiling.
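
For reference, a minimal sketch of enabling profiling after this change from a Go client follows; the connection string and the threshold value of 65 are illustrative assumptions, not recommendations.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // CockroachDB speaks the Postgres wire protocol
)

func main() {
	// Assumed local connection string; adjust for your cluster.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// A non-zero threshold turns CPU profiling on; 65 is only an example value.
	if _, err := db.Exec(
		`SET CLUSTER SETTING server.cpu_profile.cpu_usage_combined_threshold = 65`,
	); err != nil {
		log.Fatal(err)
	}
}
```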

107720: cli: add probe_range in debug.zip r=j82w a=j82w

PR #79546 introduces `crdb_internal.probe_range`. This PR adds `crdb_internal.probe_range` to debug.zip. The LIMIT gives an approximate target of ~1000ms*100 on how long this can take, so that running debug.zip against an unavailable cluster won't take too long.

closes: #80360

Release note (cli change): The debug.zip now includes the `crdb_internal.probe_range` table with a limit of 100 rows to keep the query from taking too long.

107727: server: deflake TestServerShutdownReleasesSession r=rafiss a=rafiss

The tenant was not being fully stopped, so the test could encounter flakes.

fixes #107592
Release note: None

107742: ui: show txn fingerprint details page with unspecified app r=xinhaoz a=xinhaoz

Previously, when the app was not specified in the URL search params for the txn fingerprint details page, the page would fail to load. This commit allows the page to load when no app is specified but the payload contains a fingerprint ID that matches the requested page. The first matching fingerprint ID is loaded.

Additionally, the TransactionDetailsLink will not include the appNames search param unless the provided prop is non-nullish.

Fixes: #107731

Release note (bug fix): Txn fingerprint details page in the console UI should load with the fingerprint details even if no app is specified in the URL.




Demo:
https://www.loom.com/share/810308d3dcd74ca888c42287ebafaecf

107745: kvserver: fix test merge queue when grunning unsupported r=irfansharif a=kvoli

`TestMergeQueue/load-based-merging/switch...below-threshold` asserts that switching the split objective between CPU and QPS will not cause ranges to merge, even if their pre-switch load qualified them for merging.

This test was broken when `grunning` was unsupported, as the objective never actually switches to anything other than QPS.

Add a check for `grunning` support, and assert that a merge occurs if unsupported.

Fixes: #106937
Epic: none
Release note: None

107749: opt: add enable_durable_locking_for_serializable session variable r=DrewKimball,nvanbenschoten a=michae2

Follow-up from #105857

This commit amends 6a3e43d to add a session variable to control whether guaranteed-durable locks are used under serializable isolation.

Informs: #100194

Epic: CRDB-25322

Release note (sql change): Add a new session variable, `enable_durable_locking_for_serializable`, which controls locking durability under serializable isolation. With this set to true, SELECT FOR UPDATE locks, SELECT FOR SHARED locks, and constraint check locks (e.g. locks acquired during foreign key checks if
`enable_implicit_fk_locking_for_serializable` is set to true) will be guaranteed-durable under serializable isolation, meaning they will always be held to transaction commit. (These locks are always guaranteed-durable under weaker isolation levels.)

By default, under serializable isolation these locks are best-effort rather than guaranteed-durable, meaning in some cases (e.g. leaseholder transfer, node loss, etc.) they could be released before transaction commit. Serializable isolation does not rely on locking for correctness, only using it to improve performance under contention, so this default is a deliberate choice to avoid the performance overhead of lock replication.
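
A minimal sketch of exercising the new variable from a Go client is below; the variable name and behavior come from the release note above, while the connection string and table are illustrative assumptions.

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/lib/pq" // CockroachDB speaks the Postgres wire protocol
)

func main() {
	ctx := context.Background()
	// Assumed local connection string; adjust for your cluster.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Pin a single connection so the session variable applies to the
	// transaction below.
	conn, err := db.Conn(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Opt in to guaranteed-durable locks under serializable isolation.
	if _, err := conn.ExecContext(ctx,
		`SET enable_durable_locking_for_serializable = true`); err != nil {
		log.Fatal(err)
	}

	tx, err := conn.BeginTx(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	// With the variable enabled, this lock is held until the transaction
	// commits. The table d.x is purely illustrative.
	if _, err := tx.ExecContext(ctx,
		`SELECT * FROM d.x WHERE id = 1 FOR UPDATE`); err != nil {
		log.Fatal(err)
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}
```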

107752: changefeedccl: prevent deadlock in TestChangefeedKafkaMessageTooLarge r=miretskiy a=jayshrivastava

Previously, this test would deadlock due to kafka retrying messages too many times. These messages are stored in a buffer of size 1024 created by the CDC testing infra: https://github.com/cockroachdb/cockroach/blob/5c3f96d38cdc3a2d953ca3ffb1e39e97d7e5110e/pkg/ccl/changefeedccl/testfeed_test.go#L1819

The test asserts that 2000 messages pass through the buffer. When the test finishes, it stops reading from the buffer. The problem is that, due to retries, more messages may be sent to the buffer than are read out of it. Even after the 2000 messages are read and the test is shutting down, the sink may be blocked trying to put resolved messages (plus retries) in the buffer. If this happens, the changefeed resumer (same goroutine as the kafka sink) gets blocked and does not terminate when the job is cancelled at the end of the test.

This change caps the number of retries at 200 for this test, so there should be no more than 200 extra messages plus a few resolved messages during this test. This is far less than the buffer size of 1024.

See detailed explanation in #107591.

Fixes: #107591
Epic: none
Release note: None

107802: teamcity-trigger: don't start a job for an empty target r=healthy-pod a=rickystewart

This makes no sense, so skip these cases.

Closes: #107779
Closes: #107780
Closes: #107781

Epic: none
Release note: None

107803: githubpost: set `map` field if `null` r=healthy-pod a=rickystewart

Go is a really good language.
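
The underlying Go gotcha, in a self-contained sketch (the struct and field names here are hypothetical, not the actual githubpost types): assigning into a nil map panics, so the field must be initialized first.

```go
package main

import "fmt"

// issuePayload stands in for a struct whose map field may be left nil after
// decoding; the names are illustrative only.
type issuePayload struct {
	ExtraParams map[string]string
}

func main() {
	var p issuePayload // p.ExtraParams is nil here

	// Without this check, the assignment below would panic with
	// "assignment to entry in nil map".
	if p.ExtraParams == nil {
		p.ExtraParams = make(map[string]string)
	}
	p.ExtraParams["branch"] = "master"
	fmt.Println(p.ExtraParams)
}
```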

Informs: #107779

Epic: none
Release note: None

Co-authored-by: Erik Grinaker <grinaker@cockroachlabs.com>
Co-authored-by: Lidor Carmel <lidor@cockroachlabs.com>
Co-authored-by: Xin Hao Zhang <xzhang@cockroachlabs.com>
Co-authored-by: j82w <jwilley@cockroachlabs.com>
Co-authored-by: Rafi Shamim <rafi@cockroachlabs.com>
Co-authored-by: Austen McClernon <austen@cockroachlabs.com>
Co-authored-by: Michael Erickson <michae2@cockroachlabs.com>
Co-authored-by: Jayant Shrivastava <jayants@cockroachlabs.com>
Co-authored-by: Ricky Stewart <ricky@cockroachlabs.com>
10 people committed Jul 28, 2023
12 parents 167da65 + 0a1699d + ce27952 + 18b6a6a + 8cdb073 + 9cde0a6 + 31e9b59 + 44480bb + dd65d45 + 4f1e340 + 7792db1 + 207991c commit 04c91a5
Showing 79 changed files with 1,616 additions and 176 deletions.
12 changes: 6 additions & 6 deletions DEPS.bzl
@@ -11160,10 +11160,10 @@ def go_deps():
name = "org_golang_x_exp",
build_file_proto_mode = "disable_global",
importpath = "golang.org/x/exp",
sha256 = "d22b8623bef1ead959b1c6449512dfdbf9624e1ec6c4bc443e91eee170d4d7af",
strip_prefix = "golang.org/x/exp@v0.0.0-20220827204233-334a2380cb91",
sha256 = "af32025a065aa599a3e5b01048602a53e2b6e3938b12d33fa2a5f057be9759fa",
strip_prefix = "golang.org/x/exp@v0.0.0-20230626212559-97b1e661b5df",
urls = [
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/exp/org_golang_x_exp-v0.0.0-20220827204233-334a2380cb91.zip",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/exp/org_golang_x_exp-v0.0.0-20230626212559-97b1e661b5df.zip",
],
)
go_repository(
@@ -11210,10 +11210,10 @@ def go_deps():
name = "org_golang_x_mod",
build_file_proto_mode = "disable_global",
importpath = "golang.org/x/mod",
sha256 = "ad0b72a29c07f57d92612fdbc2da6891b3988dd98b31d2fdcbe8ddb9c8cc2d52",
strip_prefix = "golang.org/x/mod@v0.9.0",
sha256 = "364198930cc7f46ba5bb1c0987d089a557aa0b406f8efec0490744a454df00a5",
strip_prefix = "golang.org/x/mod@v0.11.0",
urls = [
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/mod/org_golang_x_mod-v0.9.0.zip",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/mod/org_golang_x_mod-v0.11.0.zip",
],
)
go_repository(
4 changes: 2 additions & 2 deletions build/bazelutil/distdir_files.bzl
@@ -1051,12 +1051,12 @@ DISTDIR_FILES = {
"https://storage.googleapis.com/cockroach-godeps/gomod/goji.io/io_goji-v2.0.2+incompatible.zip": "1ea69b28e356cb91381ce2339004fcf144ad1b268c9e3497c9ef304751ae0bb3",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/arch/org_golang_x_arch-v0.0.0-20180920145803-b19384d3c130.zip": "9f67b677a3fefc503111d9aa7df8bacd2677411b0fcb982eb1654aa6d14cc3f8",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/crypto/org_golang_x_crypto-v0.9.0.zip": "672ebc916740a040d5f5472b477e1d1898e06b0c6c0a5a820b65495bbb133e82",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/exp/org_golang_x_exp-v0.0.0-20220827204233-334a2380cb91.zip": "d22b8623bef1ead959b1c6449512dfdbf9624e1ec6c4bc443e91eee170d4d7af",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/exp/org_golang_x_exp-v0.0.0-20230626212559-97b1e661b5df.zip": "af32025a065aa599a3e5b01048602a53e2b6e3938b12d33fa2a5f057be9759fa",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/exp/typeparams/org_golang_x_exp_typeparams-v0.0.0-20221208152030-732eee02a75a.zip": "9bd73f186851c6229484f486981f608d16e2b86acbbef6f4f7cc0480a508a4a4",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/image/org_golang_x_image-v0.0.0-20210628002857-a66eb6448b8d.zip": "70cf423fad9be160a88fbf01bc1897efd888f915a6d7ba0dd41ca7085f75e06e",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/lint/org_golang_x_lint-v0.0.0-20210508222113-6edffad5e616.zip": "0a4a5ebd2b1d79e7f480cbf5a54b45a257ae1ec9d11f01688efc5c35268d4603",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/mobile/org_golang_x_mobile-v0.0.0-20190719004257-d2bd2a29d028.zip": "6b946c7da47acf3b6195336fd071bfc73d543cefab73f2d27528c5dc1dc829ec",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/mod/org_golang_x_mod-v0.9.0.zip": "ad0b72a29c07f57d92612fdbc2da6891b3988dd98b31d2fdcbe8ddb9c8cc2d52",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/mod/org_golang_x_mod-v0.11.0.zip": "364198930cc7f46ba5bb1c0987d089a557aa0b406f8efec0490744a454df00a5",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/net/org_golang_x_net-v0.10.0.zip": "f92f9b2655226a6d015af7a76279a11fb55678e410b851b158fc846546f80733",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/oauth2/org_golang_x_oauth2-v0.5.0.zip": "4d9d9ce8d2f55b7d2ffb3697ed811a54606e881bb7fbbd84d3d9c42fc404ae3c",
"https://storage.googleapis.com/cockroach-godeps/gomod/golang.org/x/perf/org_golang_x_perf-v0.0.0-20230113213139-801c7ef9e5c5.zip": "bc1b902e645fdd5d210b7db8f3280833af225b131dab5842d7a6d32a676f80f5",
4 changes: 2 additions & 2 deletions go.mod
@@ -17,9 +17,9 @@ require (
github.com/google/pprof v0.0.0-20210827144239-02619b876842
github.com/google/uuid v1.3.0
golang.org/x/crypto v0.9.0
golang.org/x/exp v0.0.0-20220827204233-334a2380cb91
golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df
golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a // indirect
golang.org/x/mod v0.9.0 // indirect
golang.org/x/mod v0.11.0 // indirect
golang.org/x/net v0.10.0
golang.org/x/oauth2 v0.5.0
golang.org/x/sync v0.1.0
8 changes: 4 additions & 4 deletions go.sum
@@ -2485,8 +2485,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20220827204233-334a2380cb91 h1:tnebWN09GYg9OLPss1KXj8txwZc6X6uMr6VFdcGNbHw=
golang.org/x/exp v0.0.0-20220827204233-334a2380cb91/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df h1:UA2aFVmmsIlefxMk29Dp2juaUSth8Pyn3Tq5Y5mJGME=
golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df/go.mod h1:FXUEEKJgO7OQYeo8N01OfiKP8RXMtf6e8aTskBGqWdc=
golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a h1:Jw5wfR+h9mnIYH+OtGT2im5wV1YGGDora5vTv/aa5bE=
golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
@@ -2525,8 +2525,8 @@ golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.9.0 h1:KENHtAZL2y3NLMYZeHY9DW8HW8V+kQyJsY/V9JlKvCs=
golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.11.0 h1:bUO06HqtnRcc/7l71XBe4WcqTZ+3AH1J59zWDDwLKgU=
golang.org/x/mod v0.11.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180218175443-cbe0f9307d01/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180530234432-1e491301e022/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
3 changes: 3 additions & 0 deletions pkg/BUILD.bazel
@@ -661,6 +661,7 @@ ALL_TESTS = [
"//pkg/util/metric/aggmetric:aggmetric_test",
"//pkg/util/metric:metric_test",
"//pkg/util/mon:mon_test",
"//pkg/util/must:must_test",
"//pkg/util/netutil/addr:addr_test",
"//pkg/util/netutil:netutil_test",
"//pkg/util/optional:optional_test",
@@ -2337,6 +2338,8 @@ GO_TARGETS = [
"//pkg/util/metric:metric_test",
"//pkg/util/mon:mon",
"//pkg/util/mon:mon_test",
"//pkg/util/must:must",
"//pkg/util/must:must_test",
"//pkg/util/netutil/addr:addr",
"//pkg/util/netutil/addr:addr_test",
"//pkg/util/netutil:netutil",
4 changes: 3 additions & 1 deletion pkg/ccl/changefeedccl/changefeed_test.go
@@ -7702,8 +7702,10 @@ func TestChangefeedKafkaMessageTooLarge(t *testing.T) {

rnd, _ := randutil.NewPseudoRand()

maxFailures := int32(200)
var numFailures atomic.Int32
knobs.kafkaInterceptor = func(m *sarama.ProducerMessage, client kafkaClient) error {
if client.Config().Producer.Flush.MaxMessages > 1 && rnd.Int()%5 == 0 {
if client.Config().Producer.Flush.MaxMessages > 1 && numFailures.Add(1) < maxFailures && rnd.Int()%10 == 0 {
return sarama.ErrMessageSizeTooLarge
}
return nil
7 changes: 5 additions & 2 deletions pkg/ccl/streamingccl/replicationtestutils/testutils.go
@@ -420,8 +420,11 @@ func requireReplicatedTime(targetTime hlc.Timestamp, progress *jobspb.Progress)
}

func CreateScatteredTable(t *testing.T, c *TenantStreamingClusters, numNodes int) {
// Create a source table with multiple ranges spread across multiple nodes
numRanges := 10
// Create a source table with multiple ranges spread across multiple nodes. We
// need around 50 or more ranges because there are already over 50 system
// ranges, so if we write just a few ranges those might all be on a single
// server, which will cause the test to flake.
numRanges := 50
rowsPerRange := 20
c.SrcTenantSQL.Exec(t, "CREATE TABLE d.scattered (key INT PRIMARY KEY)")
c.SrcTenantSQL.Exec(t, "INSERT INTO d.scattered (key) SELECT * FROM generate_series(1, $1)",
48 changes: 15 additions & 33 deletions pkg/ccl/streamingccl/streamingest/replication_stream_e2e_test.go
@@ -15,7 +15,6 @@ import (
"testing"
"time"

"github.com/cockroachdb/cockroach/pkg/base"
"github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/replicationtestutils"
"github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/replicationutils"
_ "github.com/cockroachdb/cockroach/pkg/cloud/impl"
@@ -460,11 +459,12 @@ func TestTenantStreamingDropTenantCancelsStream(t *testing.T) {
})
}

// TestTenantStreamingUnavailableStreamAddress verifies that after a
// pause/resume (replan) we will not use a dead server as a source.
func TestTenantStreamingUnavailableStreamAddress(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)

skip.WithIssue(t, 106865)
skip.UnderDeadlock(t, "multi-node may time out under deadlock")
skip.UnderRace(t, "takes too long with multiple nodes")

@@ -495,38 +495,23 @@ func TestTenantStreamingUnavailableStreamAddress(t *testing.T) {
streamAddresses := progress.GetStreamIngest().StreamAddresses
require.Greater(t, len(streamAddresses), 1)

destroyedAddress := c.SrcURL.String()
// Write something to the source cluster, note that the job is paused - and
// therefore not replicated for now.
c.SrcTenantSQL.Exec(t, "CREATE TABLE d.x (id INT PRIMARY KEY, n INT)")
c.SrcTenantSQL.Exec(t, `INSERT INTO d.x VALUES (3);`)

// Stop a server on the source cluster. Note that in this test we are trying
// to avoid using the source cluster after this point because if we do the
// test flakes, see #107499 for more info.
destroyedAddress := c.SrcURL.String()
require.NoError(t, c.SrcTenantConn.Close())
c.SrcTenantServer.Stopper().Stop(ctx)
c.SrcCluster.StopServer(0)

// Once SrcCluster.Server(0) is shut down queries must be ran against a different server
alternateSrcSysSQL := sqlutils.MakeSQLRunner(c.SrcCluster.ServerConn(1))
_, alternateSrcTenantConn := serverutils.StartTenant(t, c.SrcCluster.Server(1),
base.TestTenantArgs{
TenantID: c.Args.SrcTenantID,
TenantName: c.Args.SrcTenantName,
DisableCreateTenant: true,
})
defer alternateSrcTenantConn.Close()
alternateSrcTenantSQL := sqlutils.MakeSQLRunner(alternateSrcTenantConn)

alternateCompareResult := func(query string) {
sourceData := alternateSrcTenantSQL.QueryStr(c.T, query)
destData := c.DestTenantSQL.QueryStr(c.T, query)
require.Equal(c.T, sourceData, destData)
}

c.DestSysSQL.Exec(t, `RESUME JOB $1`, ingestionJobID)
jobutils.WaitForJobToRun(t, c.DestSysSQL, jobspb.JobID(ingestionJobID))

alternateSrcTenantSQL.Exec(t, "CREATE TABLE d.x (id INT PRIMARY KEY, n INT)")
alternateSrcTenantSQL.Exec(t, `INSERT INTO d.x VALUES (3);`)

var cutoverTime time.Time
alternateSrcSysSQL.QueryRow(t, "SELECT clock_timestamp()").Scan(&cutoverTime)

cutoverTime := c.SrcCluster.Server(1).Clock().Now().GoTime()
var cutoverStr string
c.DestSysSQL.QueryRow(c.T, `ALTER TENANT $1 COMPLETE REPLICATION TO SYSTEM TIME $2::string`,
c.Args.DestTenantName, cutoverTime).Scan(&cutoverStr)
@@ -539,18 +524,15 @@ func TestTenantStreamingUnavailableStreamAddress(t *testing.T) {
require.NoError(t, cleanUpTenant())
}()

// The destroyed address should have been removed from the topology
// The destroyed address should have been removed from the topology.
progress = jobutils.GetJobProgress(c.T, c.DestSysSQL, jobspb.JobID(ingestionJobID))
newStreamAddresses := progress.GetStreamIngest().StreamAddresses
require.Contains(t, streamAddresses, destroyedAddress)
require.NotContains(t, newStreamAddresses, destroyedAddress)

alternateCompareResult("SELECT * FROM d.t1")
alternateCompareResult("SELECT * FROM d.t2")
alternateCompareResult("SELECT * FROM d.x")

// We can't use alternateCompareResult because it'll try to contact the deceased
// n1 even if the lease holders for d.scattered have all moved to other nodes
// Verify the destination tenant is fully replicated.
destData := c.DestTenantSQL.QueryStr(c.T, "SELECT * FROM d.x")
require.Equal(c.T, [][]string{{"3", "NULL"}}, destData)
dstScatteredData := c.DestTenantSQL.QueryStr(c.T, "SELECT * FROM d.scattered ORDER BY key")
require.Equal(t, srcScatteredData, dstScatteredData)
}
1 change: 1 addition & 0 deletions pkg/cli/testdata/zip/partial1
@@ -32,6 +32,7 @@ debug zip --concurrency=1 --cpu-profile-duration=0s /dev/null
[cluster] retrieving SQL data for crdb_internal.kv_store_status... writing output: debug/crdb_internal.kv_store_status.txt... done
[cluster] retrieving SQL data for crdb_internal.kv_system_privileges... writing output: debug/crdb_internal.kv_system_privileges.txt... done
[cluster] retrieving SQL data for crdb_internal.partitions... writing output: debug/crdb_internal.partitions.txt... done
[cluster] retrieving SQL data for crdb_internal.probe_ranges_1s_write_limit_100... writing output: debug/crdb_internal.probe_ranges_1s_write_limit_100.txt... done
[cluster] retrieving SQL data for crdb_internal.regions... writing output: debug/crdb_internal.regions.txt... done
[cluster] retrieving SQL data for crdb_internal.schema_changes... writing output: debug/crdb_internal.schema_changes.txt... done
[cluster] retrieving SQL data for crdb_internal.super_regions... writing output: debug/crdb_internal.super_regions.txt... done
1 change: 1 addition & 0 deletions pkg/cli/testdata/zip/partial1_excluded
@@ -32,6 +32,7 @@ debug zip /dev/null --concurrency=1 --exclude-nodes=2 --cpu-profile-duration=0
[cluster] retrieving SQL data for crdb_internal.kv_store_status... writing output: debug/crdb_internal.kv_store_status.txt... done
[cluster] retrieving SQL data for crdb_internal.kv_system_privileges... writing output: debug/crdb_internal.kv_system_privileges.txt... done
[cluster] retrieving SQL data for crdb_internal.partitions... writing output: debug/crdb_internal.partitions.txt... done
[cluster] retrieving SQL data for crdb_internal.probe_ranges_1s_write_limit_100... writing output: debug/crdb_internal.probe_ranges_1s_write_limit_100.txt... done
[cluster] retrieving SQL data for crdb_internal.regions... writing output: debug/crdb_internal.regions.txt... done
[cluster] retrieving SQL data for crdb_internal.schema_changes... writing output: debug/crdb_internal.schema_changes.txt... done
[cluster] retrieving SQL data for crdb_internal.super_regions... writing output: debug/crdb_internal.super_regions.txt... done
1 change: 1 addition & 0 deletions pkg/cli/testdata/zip/partial2
@@ -32,6 +32,7 @@ debug zip --concurrency=1 --cpu-profile-duration=0 /dev/null
[cluster] retrieving SQL data for crdb_internal.kv_store_status... writing output: debug/crdb_internal.kv_store_status.txt... done
[cluster] retrieving SQL data for crdb_internal.kv_system_privileges... writing output: debug/crdb_internal.kv_system_privileges.txt... done
[cluster] retrieving SQL data for crdb_internal.partitions... writing output: debug/crdb_internal.partitions.txt... done
[cluster] retrieving SQL data for crdb_internal.probe_ranges_1s_write_limit_100... writing output: debug/crdb_internal.probe_ranges_1s_write_limit_100.txt... done
[cluster] retrieving SQL data for crdb_internal.regions... writing output: debug/crdb_internal.regions.txt... done
[cluster] retrieving SQL data for crdb_internal.schema_changes... writing output: debug/crdb_internal.schema_changes.txt... done
[cluster] retrieving SQL data for crdb_internal.super_regions... writing output: debug/crdb_internal.super_regions.txt... done
1 change: 1 addition & 0 deletions pkg/cli/testdata/zip/testzip
@@ -32,6 +32,7 @@ debug zip --concurrency=1 --cpu-profile-duration=1s /dev/null
[cluster] retrieving SQL data for crdb_internal.kv_store_status... writing output: debug/crdb_internal.kv_store_status.txt... done
[cluster] retrieving SQL data for crdb_internal.kv_system_privileges... writing output: debug/crdb_internal.kv_system_privileges.txt... done
[cluster] retrieving SQL data for crdb_internal.partitions... writing output: debug/crdb_internal.partitions.txt... done
[cluster] retrieving SQL data for crdb_internal.probe_ranges_1s_write_limit_100... writing output: debug/crdb_internal.probe_ranges_1s_write_limit_100.txt... done
[cluster] retrieving SQL data for crdb_internal.regions... writing output: debug/crdb_internal.regions.txt... done
[cluster] retrieving SQL data for crdb_internal.schema_changes... writing output: debug/crdb_internal.schema_changes.txt... done
[cluster] retrieving SQL data for crdb_internal.super_regions... writing output: debug/crdb_internal.super_regions.txt... done
3 changes: 3 additions & 0 deletions pkg/cli/testdata/zip/testzip_concurrent
@@ -110,6 +110,9 @@ zip
[cluster] retrieving SQL data for crdb_internal.partitions...
[cluster] retrieving SQL data for crdb_internal.partitions: done
[cluster] retrieving SQL data for crdb_internal.partitions: writing output: debug/crdb_internal.partitions.txt...
[cluster] retrieving SQL data for crdb_internal.probe_ranges_1s_write_limit_100...
[cluster] retrieving SQL data for crdb_internal.probe_ranges_1s_write_limit_100: done
[cluster] retrieving SQL data for crdb_internal.probe_ranges_1s_write_limit_100: writing output: debug/crdb_internal.probe_ranges_1s_write_limit_100.txt...
[cluster] retrieving SQL data for crdb_internal.regions...
[cluster] retrieving SQL data for crdb_internal.regions: done
[cluster] retrieving SQL data for crdb_internal.regions: writing output: debug/crdb_internal.regions.txt...
Original file line number Diff line number Diff line change
@@ -40,6 +40,7 @@ debug zip --concurrency=1 --cpu-profile-duration=1s /dev/null
[cluster] retrieving SQL data for crdb_internal.kv_store_status: creating error output: debug/crdb_internal.kv_store_status.txt.err.txt... done
[cluster] retrieving SQL data for crdb_internal.kv_system_privileges... writing output: debug/crdb_internal.kv_system_privileges.txt... done
[cluster] retrieving SQL data for crdb_internal.partitions... writing output: debug/crdb_internal.partitions.txt... done
[cluster] retrieving SQL data for crdb_internal.probe_ranges_1s_write_limit_100... writing output: debug/crdb_internal.probe_ranges_1s_write_limit_100.txt... done
[cluster] retrieving SQL data for crdb_internal.regions... writing output: debug/crdb_internal.regions.txt... done
[cluster] retrieving SQL data for crdb_internal.schema_changes... writing output: debug/crdb_internal.schema_changes.txt... done
[cluster] retrieving SQL data for crdb_internal.super_regions... writing output: debug/crdb_internal.super_regions.txt... done
1 change: 1 addition & 0 deletions pkg/cli/testdata/zip/testzip_include_range_info
@@ -32,6 +32,7 @@ debug zip --concurrency=1 --cpu-profile-duration=1s --include-range-info /dev/nu
[cluster] retrieving SQL data for crdb_internal.kv_store_status... writing output: debug/crdb_internal.kv_store_status.txt... done
[cluster] retrieving SQL data for crdb_internal.kv_system_privileges... writing output: debug/crdb_internal.kv_system_privileges.txt... done
[cluster] retrieving SQL data for crdb_internal.partitions... writing output: debug/crdb_internal.partitions.txt... done
[cluster] retrieving SQL data for crdb_internal.probe_ranges_1s_write_limit_100... writing output: debug/crdb_internal.probe_ranges_1s_write_limit_100.txt... done
[cluster] retrieving SQL data for crdb_internal.regions... writing output: debug/crdb_internal.regions.txt... done
[cluster] retrieving SQL data for crdb_internal.schema_changes... writing output: debug/crdb_internal.schema_changes.txt... done
[cluster] retrieving SQL data for crdb_internal.super_regions... writing output: debug/crdb_internal.super_regions.txt... done