storage: TestClosedTimestampCanServe failed under stress [skipped] #28607

Closed
cockroach-teamcity opened this issue Aug 15, 2018 · 15 comments
Labels: A-kv-client Relating to the KV client and the KV interface. C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.

SHA: https://github.com/cockroachdb/cockroach/commits/5b184df62c5b21d82c2b6f03623ebd4ec5f92629

Parameters:

TAGS=
GOFLAGS=-race

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=837548&tab=buildLog

I180815 05:59:16.983721 119899 storage/replica_raftstorage.go:521  [replicate,n1,s1,r16/1:/Table/{19-20}] generated preemptive snapshot b14d2ab0 at index 20
I180815 05:59:17.060421 119899 storage/store_snapshot.go:641  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 10, rate-limit: 8.0 MiB/sec, 53ms
I180815 05:59:17.063931 119927 storage/replica_raftstorage.go:758  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 20 (id=b14d2ab0, encoded size=3615, 1 rocksdb batches, 10 log entries)
I180815 05:59:17.073782 119927 storage/replica_raftstorage.go:764  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=5ms commit=3ms]
I180815 05:59:17.101153 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 15 underreplicated ranges
I180815 05:59:17.205416 119899 storage/replica_command.go:759  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, (n3,s3):2, next=3, gen=842362665000]
I180815 05:59:17.421989 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 15 underreplicated ranges
I180815 05:59:17.555801 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 15 underreplicated ranges
I180815 05:59:17.674118 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 15 underreplicated ranges
I180815 05:59:17.698792 119899 storage/replica.go:3724  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180815 05:59:17.874855 119946 storage/replica_raftstorage.go:521  [replicate,n1,s1,r12/1:/Table/1{5-6}] generated preemptive snapshot a09bbf60 at index 22
I180815 05:59:17.901689 119946 storage/store_snapshot.go:641  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 12, rate-limit: 8.0 MiB/sec, 18ms
I180815 05:59:17.919171 119976 storage/replica_raftstorage.go:758  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 22 (id=a09bbf60, encoded size=3724, 1 rocksdb batches, 12 log entries)
I180815 05:59:17.976925 119976 storage/replica_raftstorage.go:764  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 57ms [clear=0ms batch=0ms entries=56ms commit=0ms]
I180815 05:59:17.985741 119946 storage/replica_command.go:759  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, (n2,s2):2, next=3, gen=842358679040]
I180815 05:59:17.992922 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 14 underreplicated ranges
I180815 05:59:18.128537 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 14 underreplicated ranges
I180815 05:59:18.277415 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 14 underreplicated ranges
I180815 05:59:18.397431 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 14 underreplicated ranges
I180815 05:59:18.525227 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 14 underreplicated ranges
I180815 05:59:18.567961 119946 storage/replica.go:3724  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180815 05:59:18.620080 119883 storage/replica_raftstorage.go:521  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] generated preemptive snapshot 508a0e02 at index 29
I180815 05:59:18.803362 119883 storage/store_snapshot.go:641  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 19, rate-limit: 8.0 MiB/sec, 106ms
I180815 05:59:18.817260 119949 storage/replica_raftstorage.go:758  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 29 (id=508a0e02, encoded size=61412, 1 rocksdb batches, 19 log entries)
I180815 05:59:18.855681 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 13 underreplicated ranges
I180815 05:59:18.892516 119949 storage/replica_raftstorage.go:764  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 75ms [clear=18ms batch=0ms entries=55ms commit=1ms]
I180815 05:59:18.902897 119883 storage/replica_command.go:759  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, (n3,s3):2, next=3, gen=842367815536]
I180815 05:59:18.981010 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 13 underreplicated ranges
I180815 05:59:19.214883 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 13 underreplicated ranges
I180815 05:59:19.301279 119883 storage/replica.go:3724  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180815 05:59:19.345116 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 13 underreplicated ranges
I180815 05:59:19.379864 119932 storage/replica_raftstorage.go:521  [replicate,n1,s1,r21/1:/Table/5{0-1}] generated preemptive snapshot d9e3644e at index 22
I180815 05:59:19.388354 119932 storage/store_snapshot.go:641  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 12, rate-limit: 8.0 MiB/sec, 5ms
I180815 05:59:19.391603 119730 storage/replica_raftstorage.go:758  [n2,s2,r21/?:{-}] applying preemptive snapshot at index 22 (id=d9e3644e, encoded size=3697, 1 rocksdb batches, 12 log entries)
I180815 05:59:19.397628 119730 storage/replica_raftstorage.go:764  [n2,s2,r21/?:/Table/5{0-1}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=4ms commit=0ms]
I180815 05:59:19.445224 119932 storage/replica_command.go:759  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, (n3,s3):2, next=3, gen=842353972776]
I180815 05:59:19.466452 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 12 underreplicated ranges
I180815 05:59:19.627035 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 12 underreplicated ranges
I180815 05:59:19.732244 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 12 underreplicated ranges
I180815 05:59:19.742767 119932 storage/replica.go:3724  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180815 05:59:19.875273 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 11 underreplicated ranges
I180815 05:59:19.892612 119935 storage/replica_raftstorage.go:521  [replicate,n1,s1,r10/1:/Table/1{3-4}] generated preemptive snapshot e5f10b80 at index 112
I180815 05:59:19.902326 119935 storage/store_snapshot.go:641  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 302, log entries: 14, rate-limit: 8.0 MiB/sec, 8ms
I180815 05:59:19.975439 119996 storage/replica_raftstorage.go:758  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 112 (id=e5f10b80, encoded size=40556, 1 rocksdb batches, 14 log entries)
I180815 05:59:19.988335 119996 storage/replica_raftstorage.go:764  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 13ms [clear=0ms batch=0ms entries=11ms commit=1ms]
I180815 05:59:20.020520 119935 storage/replica_command.go:759  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, (n2,s2):2, next=3, gen=842367602056]
I180815 05:59:20.115205 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 11 underreplicated ranges
I180815 05:59:20.243097 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 11 underreplicated ranges
I180815 05:59:20.424941 119935 storage/replica.go:3724  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180815 05:59:20.432596 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 11 underreplicated ranges
I180815 05:59:20.545829 120010 storage/replica_raftstorage.go:521  [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot 418caf09 at index 144
I180815 05:59:20.620929 120010 storage/store_snapshot.go:641  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 89, log entries: 134, rate-limit: 8.0 MiB/sec, 51ms
I180815 05:59:20.630135 120038 storage/replica_raftstorage.go:758  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 144 (id=418caf09, encoded size=49059, 1 rocksdb batches, 134 log entries)
I180815 05:59:20.638024 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 10 underreplicated ranges
I180815 05:59:20.744781 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 10 underreplicated ranges
I180815 05:59:20.867465 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 10 underreplicated ranges
I180815 05:59:20.983089 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 10 underreplicated ranges
I180815 05:59:21.054870 120038 storage/replica_raftstorage.go:764  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 424ms [clear=0ms batch=0ms entries=397ms commit=27ms]
I180815 05:59:21.060885 120010 storage/replica_command.go:759  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n3,s3):2, next=3, gen=842393130736]
I180815 05:59:21.087504 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 10 underreplicated ranges
I180815 05:59:21.187100 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 10 underreplicated ranges
I180815 05:59:21.322460 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 10 underreplicated ranges
I180815 05:59:21.404864 120010 storage/replica.go:3724  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180815 05:59:21.430089 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 10 underreplicated ranges
I180815 05:59:21.515314 120028 storage/replica_raftstorage.go:521  [replicate,n1,s1,r18/1:/Table/2{1-2}] generated preemptive snapshot b776c4d5 at index 23
I180815 05:59:21.537877 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 9 underreplicated ranges
I180815 05:59:21.579264 120028 storage/store_snapshot.go:641  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 13, rate-limit: 8.0 MiB/sec, 22ms
I180815 05:59:21.610280 120042 storage/replica_raftstorage.go:758  [n2,s2,r18/?:{-}] applying preemptive snapshot at index 23 (id=b776c4d5, encoded size=4095, 1 rocksdb batches, 13 log entries)
I180815 05:59:21.630117 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 9 underreplicated ranges
I180815 05:59:21.682125 120042 storage/replica_raftstorage.go:764  [n2,s2,r18/?:/Table/2{1-2}] applied preemptive snapshot in 56ms [clear=1ms batch=0ms entries=35ms commit=0ms]
I180815 05:59:21.697898 120028 storage/replica_command.go:759  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, (n3,s3):2, next=3, gen=842353974424]
I180815 05:59:21.744167 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 9 underreplicated ranges
I180815 05:59:21.861023 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 9 underreplicated ranges
I180815 05:59:21.955227 120028 storage/replica.go:3724  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180815 05:59:21.986283 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 9 underreplicated ranges
I180815 05:59:22.023174 120053 storage/replica_raftstorage.go:521  [replicate,n1,s1,r11/1:/Table/1{4-5}] generated preemptive snapshot 2dea0383 at index 20
I180815 05:59:22.031704 120053 storage/store_snapshot.go:641  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 10, rate-limit: 8.0 MiB/sec, 7ms
I180815 05:59:22.071593 120032 storage/replica_raftstorage.go:758  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 20 (id=2dea0383, encoded size=3444, 1 rocksdb batches, 10 log entries)
I180815 05:59:22.077028 120032 storage/replica_raftstorage.go:764  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
I180815 05:59:22.083682 120053 storage/replica_command.go:759  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, (n2,s2):2, next=3, gen=842354614192]
I180815 05:59:22.096078 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 8 underreplicated ranges
I180815 05:59:22.287069 119037 server/status/runtime.go:424  [n2] runtime stats: 1.3 GiB RSS, 618 goroutines, 48 MiB/55 MiB/125 MiB GO alloc/idle/total, 92 MiB/138 MiB CGO alloc/total, 895.90cgo/sec, 0.81/0.05 %(u/s)time, 0.02 %gc (5x)
I180815 05:59:22.336664 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 8 underreplicated ranges
I180815 05:59:22.415806 120053 storage/replica.go:3724  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180815 05:59:22.560764 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 7 underreplicated ranges
I180815 05:59:22.648512 120088 storage/replica_raftstorage.go:521  [replicate,n1,s1,r19/1:/Table/2{2-3}] generated preemptive snapshot 5ed7199b at index 22
I180815 05:59:22.666113 120088 storage/store_snapshot.go:641  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 12, rate-limit: 8.0 MiB/sec, 12ms
I180815 05:59:22.670031 120046 storage/replica_raftstorage.go:758  [n2,s2,r19/?:{-}] applying preemptive snapshot at index 22 (id=5ed7199b, encoded size=3724, 1 rocksdb batches, 12 log entries)
I180815 05:59:22.717988 120046 storage/replica_raftstorage.go:764  [n2,s2,r19/?:/Table/2{2-3}] applied preemptive snapshot in 46ms [clear=0ms batch=0ms entries=39ms commit=4ms]
I180815 05:59:22.782094 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 7 underreplicated ranges
I180815 05:59:22.810981 120088 storage/replica_command.go:759  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, (n3,s3):2, next=3, gen=842354928856]
I180815 05:59:22.850925 119039 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:22.933095 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 7 underreplicated ranges
I180815 05:59:23.030125 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 7 underreplicated ranges
I180815 05:59:23.139799 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 7 underreplicated ranges
I180815 05:59:23.250060 120088 storage/replica.go:3724  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180815 05:59:23.318507 120060 storage/replica_raftstorage.go:521  [replicate,n1,s1,r22/1:/{Table/51-Max}] generated preemptive snapshot 082c2144 at index 18
I180815 05:59:23.326315 120060 storage/store_snapshot.go:641  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 8, rate-limit: 8.0 MiB/sec, 7ms
I180815 05:59:23.330244 120064 storage/replica_raftstorage.go:758  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 18 (id=082c2144, encoded size=1935, 1 rocksdb batches, 8 log entries)
I180815 05:59:23.340590 120064 storage/replica_raftstorage.go:764  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 10ms [clear=0ms batch=0ms entries=9ms commit=0ms]
I180815 05:59:23.345861 120060 storage/replica_command.go:759  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
I180815 05:59:23.427516 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 6 underreplicated ranges
I180815 05:59:23.562902 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 6 underreplicated ranges
I180815 05:59:23.704594 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 6 underreplicated ranges
I180815 05:59:23.720957 120060 storage/replica.go:3724  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180815 05:59:23.792452 120115 storage/replica_raftstorage.go:521  [replicate,n1,s1,r15/1:/Table/1{8-9}] generated preemptive snapshot d1ad7918 at index 22
I180815 05:59:23.833995 120115 storage/store_snapshot.go:641  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 12, rate-limit: 8.0 MiB/sec, 40ms
I180815 05:59:23.840501 120118 storage/replica_raftstorage.go:758  [n3,s3,r15/?:{-}] applying preemptive snapshot at index 22 (id=d1ad7918, encoded size=3892, 1 rocksdb batches, 12 log entries)
I180815 05:59:23.844839 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 5 underreplicated ranges
I180815 05:59:23.907595 120118 storage/replica_raftstorage.go:764  [n3,s3,r15/?:/Table/1{8-9}] applied preemptive snapshot in 49ms [clear=0ms batch=0ms entries=39ms commit=0ms]
I180815 05:59:23.944310 120115 storage/replica_command.go:759  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, (n2,s2):2, next=3, gen=842354606512]
I180815 05:59:23.990458 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 5 underreplicated ranges
I180815 05:59:24.069821 119417 server/status/runtime.go:424  [n3] runtime stats: 1.3 GiB RSS, 624 goroutines, 45 MiB/58 MiB/125 MiB GO alloc/idle/total, 93 MiB/138 MiB CGO alloc/total, 838.74cgo/sec, 0.79/0.05 %(u/s)time, 0.01 %gc (5x)
I180815 05:59:24.104763 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 5 underreplicated ranges
I180815 05:59:24.207822 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 5 underreplicated ranges
I180815 05:59:24.287357 120115 storage/replica.go:3724  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180815 05:59:24.457116 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 4 underreplicated ranges
I180815 05:59:24.475833 120137 storage/replica_raftstorage.go:521  [replicate,n1,s1,r5/1:/System/ts{d-e}] generated preemptive snapshot f537aa8d at index 38
I180815 05:59:24.496092 118463 server/status/runtime.go:424  [n1] runtime stats: 1.3 GiB RSS, 618 goroutines, 50 MiB/54 MiB/125 MiB GO alloc/idle/total, 93 MiB/138 MiB CGO alloc/total, 810.17cgo/sec, 0.77/0.05 %(u/s)time, 0.01 %gc (5x)
I180815 05:59:24.508021 119419 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:24.674950 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 4 underreplicated ranges
I180815 05:59:24.801480 120137 storage/store_snapshot.go:641  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1382, log entries: 3, rate-limit: 8.0 MiB/sec, 306ms
I180815 05:59:24.811494 120149 storage/replica_raftstorage.go:758  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 38 (id=f537aa8d, encoded size=265968, 1 rocksdb batches, 3 log entries)
I180815 05:59:24.824439 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 4 underreplicated ranges
I180815 05:59:24.849735 118465 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:24.956837 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 4 underreplicated ranges
I180815 05:59:24.987518 120149 storage/replica_raftstorage.go:764  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 176ms [clear=0ms batch=1ms entries=122ms commit=2ms]
I180815 05:59:25.129398 120137 storage/replica_command.go:759  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n2,s2):2, next=3, gen=842380698072]
I180815 05:59:25.143727 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 4 underreplicated ranges
I180815 05:59:25.306169 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 4 underreplicated ranges
I180815 05:59:25.418723 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 4 underreplicated ranges
I180815 05:59:25.428396 118309 gossip/gossip.go:1365  [n1] node has connected to cluster via gossip
I180815 05:59:25.429774 118309 storage/stores.go:241  [n1] wrote 2 node addresses to persistent storage
I180815 05:59:25.467872 120137 storage/replica.go:3724  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180815 05:59:25.488166 120152 storage/replica_raftstorage.go:521  [replicate,n1,s1,r13/1:/Table/1{6-7}] generated preemptive snapshot cce83600 at index 22
I180815 05:59:25.494671 120152 storage/store_snapshot.go:641  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n3,s3):?: kv pairs: 11, log entries: 12, rate-limit: 8.0 MiB/sec, 5ms
I180815 05:59:25.498134 120182 storage/replica_raftstorage.go:758  [n3,s3,r13/?:{-}] applying preemptive snapshot at index 22 (id=cce83600, encoded size=3721, 1 rocksdb batches, 12 log entries)
I180815 05:59:25.519647 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 3 underreplicated ranges
I180815 05:59:25.522796 120182 storage/replica_raftstorage.go:764  [n3,s3,r13/?:/Table/1{6-7}] applied preemptive snapshot in 24ms [clear=0ms batch=0ms entries=23ms commit=0ms]
I180815 05:59:25.528348 120152 storage/replica_command.go:759  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, (n2,s2):2, next=3, gen=842353761000]
I180815 05:59:25.625312 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 3 underreplicated ranges
I180815 05:59:25.844316 120152 storage/replica.go:3724  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180815 05:59:25.866771 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 3 underreplicated ranges
I180815 05:59:25.973071 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 2 underreplicated ranges
I180815 05:59:26.054646 120127 storage/replica_raftstorage.go:521  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] generated preemptive snapshot 51be7d37 at index 27
I180815 05:59:26.102030 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 2 underreplicated ranges
I180815 05:59:26.143076 120127 storage/store_snapshot.go:641  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 17, rate-limit: 8.0 MiB/sec, 42ms
I180815 05:59:26.199001 120130 storage/replica_raftstorage.go:758  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 27 (id=51be7d37, encoded size=5950, 1 rocksdb batches, 17 log entries)
I180815 05:59:26.238332 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 2 underreplicated ranges
I180815 05:59:26.333099 120130 storage/replica_raftstorage.go:764  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 134ms [clear=0ms batch=0ms entries=132ms commit=0ms]
I180815 05:59:26.489811 120127 storage/replica_command.go:759  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=842354180568]
I180815 05:59:26.517620 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 2 underreplicated ranges
W180815 05:59:26.528392 118465 server/node.go:886  [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:ranges.underreplicated Value:2}]}
I180815 05:59:26.763174 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 2 underreplicated ranges
I180815 05:59:26.869725 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 2 underreplicated ranges
I180815 05:59:26.954504 120127 storage/replica.go:3724  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180815 05:59:27.097286 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 1 underreplicated ranges
I180815 05:59:27.157512 118931 gossip/gossip.go:1365  [n2] node has connected to cluster via gossip
I180815 05:59:27.158763 118931 storage/stores.go:241  [n2] wrote 2 node addresses to persistent storage
I180815 05:59:27.160698 120217 storage/replica_raftstorage.go:521  [replicate,n1,s1,r20/1:/Table/{23-50}] generated preemptive snapshot a2824dda at index 22
I180815 05:59:27.192897 120217 storage/store_snapshot.go:641  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 15, log entries: 12, rate-limit: 8.0 MiB/sec, 31ms
I180815 05:59:27.200619 120243 storage/replica_raftstorage.go:758  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 22 (id=a2824dda, encoded size=3852, 1 rocksdb batches, 12 log entries)
I180815 05:59:27.237813 120243 storage/replica_raftstorage.go:764  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 37ms [clear=0ms batch=0ms entries=20ms commit=0ms]
I180815 05:59:27.317018 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 1 underreplicated ranges
I180815 05:59:27.328148 120217 storage/replica_command.go:759  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=842378765000]
I180815 05:59:27.452587 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 1 underreplicated ranges
I180815 05:59:27.648821 118204 testutils/testcluster/testcluster.go:536  [n1,s1] has 1 underreplicated ranges
I180815 05:59:27.693641 120217 storage/replica.go:3724  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180815 05:59:27.867772 120177 storage/replica_range_lease.go:554  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] transferring lease to s3
I180815 05:59:27.999696 120177 storage/replica_range_lease.go:617  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] done transferring lease to s3: <nil>
I180815 05:59:28.107885 119394 storage/replica_proposal.go:214  [n3,s3,r6/2:/{System/tse-Table/System…}] new range lease repl=(n3,s3):2 seq=4 start=1534312767.867972868,0 epo=1 pro=1534312767.867994497,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.403557297,0
I180815 05:59:29.093898 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:kv.closed_timestamp.target_duration Value:300ms User:root}
I180815 05:59:29.844715 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:kv.closed_timestamp.close_fraction Value:0.03333333333333333 User:root}
I180815 05:59:30.062135 120222 storage/replica_range_lease.go:554  [replicate,n1,s1,r21/1:/Table/5{0-1}] transferring lease to s2
I180815 05:59:30.076604 120222 storage/replica_range_lease.go:617  [replicate,n1,s1,r21/1:/Table/5{0-1}] done transferring lease to s2: <nil>
I180815 05:59:30.084312 119013 storage/replica_proposal.go:214  [n2,s2,r21/3:/Table/5{0-1}] new range lease repl=(n2,s2):3 seq=4 start=1534312770.062361503,0 epo=1 pro=1534312770.062410281,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.751586201,0
I180815 05:59:30.918491 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:kv.closed_timestamp.follower_reads_enabled Value:true User:root}
I180815 05:59:31.435671 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_database", target: 52, info: {DatabaseName:cttest Statement:CREATE DATABASE cttest User:root}
I180815 05:59:31.612528 120314 storage/replica_command.go:280  [split,n1,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
I180815 05:59:32.087808 120357 storage/replica_range_lease.go:554  [replicate,n1,s1,r8/1:/Table/1{1-2}] transferring lease to s2
I180815 05:59:32.113402 120357 storage/replica_range_lease.go:617  [replicate,n1,s1,r8/1:/Table/1{1-2}] done transferring lease to s2: <nil>
I180815 05:59:32.270379 118968 storage/replica_proposal.go:214  [n2,s2,r8/2:/Table/1{1-2}] new range lease repl=(n2,s2):2 seq=4 start=1534312772.088084172,0 epo=1 pro=1534312772.088130492,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.453077303,0
I180815 05:59:32.295164 119037 server/status/runtime.go:424  [n2] runtime stats: 1.4 GiB RSS, 632 goroutines, 48 MiB/55 MiB/125 MiB GO alloc/idle/total, 96 MiB/142 MiB CGO alloc/total, 723.72cgo/sec, 0.86/0.05 %(u/s)time, 0.03 %gc (7x)
I180815 05:59:32.621313 118401 storage/replica_proposal.go:214  [n1,s1,r22/1:/{Table/51-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 05:59:32.886654 119039 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:34.239323 119417 server/status/runtime.go:424  [n3] runtime stats: 1.4 GiB RSS, 629 goroutines, 66 MiB/39 MiB/125 MiB GO alloc/idle/total, 96 MiB/142 MiB CGO alloc/total, 689.51cgo/sec, 0.91/0.06 %(u/s)time, 0.03 %gc (8x)
I180815 05:59:34.247180 120305 storage/replica_range_lease.go:554  [replicate,n1,s1,r10/1:/Table/1{3-4}] transferring lease to s3
I180815 05:59:34.510883 119419 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:34.549340 119369 storage/replica_proposal.go:214  [n3,s3,r10/3:/Table/1{3-4}] new range lease repl=(n3,s3):3 seq=4 start=1534312774.247422289,0 epo=1 pro=1534312774.247513674,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.281211695,0
I180815 05:59:34.627155 118463 server/status/runtime.go:424  [n1] runtime stats: 1.4 GiB RSS, 633 goroutines, 44 MiB/60 MiB/125 MiB GO alloc/idle/total, 96 MiB/142 MiB CGO alloc/total, 686.91cgo/sec, 0.93/0.06 %(u/s)time, 0.03 %gc (8x)
I180815 05:59:34.763550 120305 storage/replica_range_lease.go:617  [replicate,n1,s1,r10/1:/Table/1{3-4}] done transferring lease to s3: <nil>
I180815 05:59:34.860399 118465 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:36.355180 120441 storage/replica_range_lease.go:554  [replicate,n1,s1,r20/1:/Table/{23-50}] transferring lease to s2
I180815 05:59:36.893561 120441 storage/replica_range_lease.go:617  [replicate,n1,s1,r20/1:/Table/{23-50}] done transferring lease to s2: <nil>
I180815 05:59:36.908935 119004 storage/replica_proposal.go:214  [n2,s2,r20/2:/Table/{23-50}] new range lease repl=(n2,s2):2 seq=4 start=1534312776.355426231,0 epo=1 pro=1534312776.355464292,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.414919488,0
I180815 05:59:38.547280 120431 storage/replica_range_lease.go:554  [replicate,n1,s1,r15/1:/Table/1{8-9}] transferring lease to s3
I180815 05:59:38.946752 119329 storage/replica_proposal.go:214  [n3,s3,r15/3:/Table/1{8-9}] new range lease repl=(n3,s3):3 seq=4 start=1534312778.547516757,0 epo=1 pro=1534312778.547556988,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.375691907,0
I180815 05:59:38.948329 120431 storage/replica_range_lease.go:617  [replicate,n1,s1,r15/1:/Table/1{8-9}] done transferring lease to s3: <nil>
I180815 05:59:40.782584 119208 gossip/gossip.go:1365  [n3] node has connected to cluster via gossip
I180815 05:59:40.948260 119208 storage/stores.go:241  [n3] wrote 2 node addresses to persistent storage
I180815 05:59:41.558758 120464 storage/replica_range_lease.go:554  [replicate,n1,s1,r22/1:/Table/5{1-2}] transferring lease to s3
I180815 05:59:41.612816 120464 storage/replica_range_lease.go:617  [replicate,n1,s1,r22/1:/Table/5{1-2}] done transferring lease to s3: <nil>
I180815 05:59:41.644640 119384 storage/replica_proposal.go:214  [n3,s3,r22/3:/Table/5{1-2}] new range lease repl=(n3,s3):3 seq=4 start=1534312781.559053676,0 epo=1 pro=1534312781.559094261,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 05:59:42.303645 119037 server/status/runtime.go:424  [n2] runtime stats: 1.4 GiB RSS, 628 goroutines, 37 MiB/66 MiB/125 MiB GO alloc/idle/total, 98 MiB/145 MiB CGO alloc/total, 400.06cgo/sec, 0.97/0.06 %(u/s)time, 0.06 %gc (9x)
I180815 05:59:42.886774 119039 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:43.589067 120552 storage/replica_range_lease.go:554  [replicate,n1,s1,r1/1:/{Min-System/}] transferring lease to s3
I180815 05:59:43.683892 120552 storage/replica_range_lease.go:617  [replicate,n1,s1,r1/1:/{Min-System/}] done transferring lease to s3: <nil>
I180815 05:59:44.112289 119417 server/status/runtime.go:424  [n3] runtime stats: 1.4 GiB RSS, 628 goroutines, 62 MiB/42 MiB/125 MiB GO alloc/idle/total, 100 MiB/146 MiB CGO alloc/total, 350.65cgo/sec, 0.96/0.06 %(u/s)time, 0.07 %gc (8x)
I180815 05:59:44.575129 118463 server/status/runtime.go:424  [n1] runtime stats: 1.4 GiB RSS, 629 goroutines, 39 MiB/64 MiB/125 MiB GO alloc/idle/total, 100 MiB/146 MiB CGO alloc/total, 339.76cgo/sec, 0.95/0.05 %(u/s)time, 0.07 %gc (9x)
I180815 05:59:44.607038 119419 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:44.773175 120539 storage/replica_range_lease.go:554  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] transferring lease to s3
I180815 05:59:44.874424 118465 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:45.326423 120539 storage/replica_range_lease.go:617  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] done transferring lease to s3: <nil>
I180815 05:59:48.013679 119062 storage/node_liveness.go:817  [n2,hb] retrying liveness update after storage.errRetryLiveness: TransactionStatusError: could not commit in one phase as requested (REASON_UNKNOWN)
W180815 05:59:48.588051 119062 storage/node_liveness.go:558  [n2,hb] slow heartbeat took 1.3s
I180815 05:59:49.558853 119426 storage/node_liveness.go:817  [n3,hb] retrying liveness update after storage.errRetryLiveness: TransactionStatusError: could not commit in one phase as requested (REASON_UNKNOWN)
I180815 05:59:50.285561 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 53, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.308181 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 54, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.313813 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 55, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.315914 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 56, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316013 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 57, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316086 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 58, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316157 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 59, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316222 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 60, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316295 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 61, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316363 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 62, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316455 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 63, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316557 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 64, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316635 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 65, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316701 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 66, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.316777 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 67, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.322328 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 68, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.322471 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 69, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.322571 120112 sql/event_log.go:126  [n1,client=127.0.0.1:53434,user=root] Event: "create_table", target: 70, info: {TableName:cttest.public.kv Statement:CREATE TABLE cttest.public.kv (id INT PRIMARY KEY, value STRING) User:root}
I180815 05:59:50.600707 120620 storage/replica_command.go:280  [split,n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53 [r24]
I180815 05:59:52.306212 119037 server/status/runtime.go:424  [n2] runtime stats: 1.4 GiB RSS, 630 goroutines, 46 MiB/57 MiB/125 MiB GO alloc/idle/total, 100 MiB/146 MiB CGO alloc/total, 330.02cgo/sec, 0.94/0.04 %(u/s)time, 0.05 %gc (8x)
I180815 05:59:52.629136 118421 storage/replica_proposal.go:214  [n1,s1,r23/1:/{Table/52-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 05:59:52.845014 120723 storage/replica_command.go:280  [split,n1,s1,r24/1:/{Table/53-Max}] initiating a split of this range at key /Table/54 [r25]
I180815 05:59:52.853208 119039 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:54.045423 119417 server/status/runtime.go:424  [n3] runtime stats: 1.4 GiB RSS, 633 goroutines, 59 MiB/44 MiB/125 MiB GO alloc/idle/total, 101 MiB/146 MiB CGO alloc/total, 379.94cgo/sec, 0.95/0.04 %(u/s)time, 0.05 %gc (8x)
I180815 05:59:54.478989 118460 gossip/gossip.go:538  [n1] gossip status (ok, 3 nodes)
gossip client (0/3 cur/max conns)
gossip server (2/3 cur/max conns, infos 779/45 sent/received, bytes 346076B/13056B sent/received)
  2: 127.0.0.1:43007 (52s)
  3: 127.0.0.1:45497 (51s)
I180815 05:59:54.583487 118463 server/status/runtime.go:424  [n1] runtime stats: 1.4 GiB RSS, 632 goroutines, 52 MiB/52 MiB/125 MiB GO alloc/idle/total, 101 MiB/146 MiB CGO alloc/total, 389.88cgo/sec, 0.94/0.04 %(u/s)time, 0.05 %gc (8x)
I180815 05:59:54.596275 119419 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:54.851508 118465 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 05:59:55.123709 118436 storage/replica_proposal.go:214  [n1,s1,r24/1:/{Table/53-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 05:59:55.229158 120747 storage/replica_command.go:280  [split,n1,s1,r25/1:/{Table/54-Max}] initiating a split of this range at key /Table/55 [r26]
I180815 05:59:55.377765 120748 storage/replica_range_lease.go:554  [replicate,n1,s1,r24/1:/Table/5{3-4}] transferring lease to s3
I180815 05:59:55.644439 119399 storage/replica_proposal.go:214  [n3,s3,r24/3:/Table/5{3-4}] new range lease repl=(n3,s3):3 seq=4 start=1534312795.404076597,0 epo=1 pro=1534312795.404161381,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 05:59:55.648441 120748 storage/replica_range_lease.go:617  [replicate,n1,s1,r24/1:/Table/5{3-4}] done transferring lease to s3: <nil>
I180815 05:59:57.741885 118204 storage/closed_timestamp_test.go:62  SucceedsSoon: sql: no rows in result set
I180815 05:59:58.226185 120705 storage/replica_range_lease.go:554  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] transferring lease to s2
W180815 05:59:58.368600 119062 storage/node_liveness.go:558  [n2,hb] slow heartbeat took 2.1s
I180815 05:59:58.451461 118504 storage/node_liveness.go:817  [n1,hb] retrying liveness update after storage.errRetryLiveness: TransactionStatusError: could not commit in one phase as requested (REASON_UNKNOWN)
I180815 05:59:59.828199 120705 storage/replica_range_lease.go:617  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] done transferring lease to s2: <nil>
W180815 06:00:01.191552 119426 storage/node_liveness.go:558  [n3,hb] slow heartbeat took 3.1s
W180815 06:00:01.715099 118504 storage/node_liveness.go:558  [n1,hb] slow heartbeat took 4.2s
W180815 06:00:02.247358 119062 storage/node_liveness.go:558  [n2,hb] slow heartbeat took 1.4s
I180815 06:00:02.267039 119034 gossip/gossip.go:538  [n2] gossip status (ok, 3 nodes)
gossip client (1/3 cur/max conns)
  1: 127.0.0.1:35975 (1m0s: infos 19/523 sent/received, bytes 6256B/184112B sent/received)
gossip server (0/3 cur/max conns, infos 19/523 sent/received, bytes 6256B/184112B sent/received)
I180815 06:00:02.344448 119037 server/status/runtime.go:424  [n2] runtime stats: 1.4 GiB RSS, 632 goroutines, 49 MiB/54 MiB/125 MiB GO alloc/idle/total, 101 MiB/146 MiB CGO alloc/total, 319.38cgo/sec, 0.92/0.05 %(u/s)time, 0.04 %gc (8x)
I180815 06:00:02.392446 118408 storage/replica_proposal.go:214  [n1,s1,r25/1:/{Table/54-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 06:00:02.453230 120841 storage/replica_command.go:280  [split,n1,s1,r26/1:/{Table/55-Max}] initiating a split of this range at key /Table/56 [r27]
I180815 06:00:02.933053 119039 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 06:00:04.072407 119414 gossip/gossip.go:538  [n3] gossip status (ok, 3 nodes)
gossip client (1/3 cur/max conns)
  1: 127.0.0.1:35975 (1m1s: infos 48/311 sent/received, bytes 11681B/176197B sent/received)
gossip server (0/3 cur/max conns, infos 48/311 sent/received, bytes 11681B/176197B sent/received)
I180815 06:00:04.251609 119417 server/status/runtime.go:424  [n3] runtime stats: 1.4 GiB RSS, 640 goroutines, 61 MiB/44 MiB/125 MiB GO alloc/idle/total, 101 MiB/146 MiB CGO alloc/total, 302.66cgo/sec, 0.92/0.06 %(u/s)time, 0.04 %gc (9x)
I180815 06:00:04.507573 118463 server/status/runtime.go:424  [n1] runtime stats: 1.4 GiB RSS, 644 goroutines, 41 MiB/62 MiB/125 MiB GO alloc/idle/total, 101 MiB/146 MiB CGO alloc/total, 300.48cgo/sec, 0.93/0.06 %(u/s)time, 0.04 %gc (8x)
I180815 06:00:04.568482 119419 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 06:00:04.864355 118465 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
W180815 06:00:05.554442 118418 storage/store.go:3747  [n1,s1] handle raft ready: 0.6s [processed=0]
I180815 06:00:05.766168 118442 storage/replica_proposal.go:214  [n1,s1,r26/1:/{Table/55-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 06:00:05.926444 120979 storage/replica_command.go:280  [split,n1,s1,r27/1:/{Table/56-Max}] initiating a split of this range at key /Table/57 [r28]
I180815 06:00:06.388078 120953 storage/replica_range_lease.go:554  [replicate,n1,s1,r11/1:/Table/1{4-5}] transferring lease to s2
I180815 06:00:06.660810 120953 storage/replica_range_lease.go:617  [replicate,n1,s1,r11/1:/Table/1{4-5}] done transferring lease to s2: <nil>
I180815 06:00:06.696534 118975 storage/replica_proposal.go:214  [n2,s2,r11/2:/Table/1{4-5}] new range lease repl=(n2,s2):2 seq=4 start=1534312806.388365279,0 epo=1 pro=1534312806.388418277,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.617185946,0
I180815 06:00:07.305799 118204 storage/closed_timestamp_test.go:62  SucceedsSoon: sql: no rows in result set
I180815 06:00:08.561401 118415 storage/replica_proposal.go:214  [n1,s1,r27/1:/{Table/56-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 06:00:08.706162 120993 storage/replica_command.go:280  [split,n1,s1,r28/1:/{Table/57-Max}] initiating a split of this range at key /Table/58 [r29]
I180815 06:00:09.584489 120976 storage/replica_range_lease.go:554  [replicate,n1,s1,r16/1:/Table/{19-20}] transferring lease to s2
I180815 06:00:10.204297 120976 storage/replica_range_lease.go:617  [replicate,n1,s1,r16/1:/Table/{19-20}] done transferring lease to s2: <nil>
I180815 06:00:10.374937 118985 storage/replica_proposal.go:214  [n2,s2,r16/3:/Table/{19-20}] new range lease repl=(n2,s2):3 seq=4 start=1534312809.584821552,0 epo=1 pro=1534312809.584860388,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.263092547,0
I180815 06:00:11.054011 118432 storage/replica_proposal.go:214  [n1,s1,r28/1:/{Table/57-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 06:00:11.120461 121144 storage/replica_command.go:280  [split,n1,s1,r29/1:/{Table/58-Max}] initiating a split of this range at key /Table/59 [r30]
I180815 06:00:12.006178 118204 storage/closed_timestamp_test.go:62  SucceedsSoon: sql: no rows in result set
I180815 06:00:12.314597 119037 server/status/runtime.go:424  [n2] runtime stats: 1.4 GiB RSS, 636 goroutines, 64 MiB/39 MiB/125 MiB GO alloc/idle/total, 101 MiB/146 MiB CGO alloc/total, 527.53cgo/sec, 0.95/0.05 %(u/s)time, 0.03 %gc (7x)
W180815 06:00:12.667474 118504 storage/node_liveness.go:558  [n1,hb] slow heartbeat took 1.7s
W180815 06:00:12.752203 118411 storage/store.go:3747  [n1,s1] handle raft ready: 0.5s [processed=0]
I180815 06:00:12.855578 119039 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
W180815 06:00:13.130339 119426 storage/node_liveness.go:558  [n3,hb] slow heartbeat took 1.5s
I180815 06:00:13.976763 118204 storage/closed_timestamp_test.go:62  SucceedsSoon: sql: no rows in result set
I180815 06:00:14.105267 119417 server/status/runtime.go:424  [n3] runtime stats: 1.4 GiB RSS, 631 goroutines, 51 MiB/52 MiB/125 MiB GO alloc/idle/total, 100 MiB/146 MiB CGO alloc/total, 463.58cgo/sec, 0.94/0.04 %(u/s)time, 0.04 %gc (7x)
I180815 06:00:14.509871 119419 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 06:00:14.898584 118465 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 06:00:14.906566 118463 server/status/runtime.go:424  [n1] runtime stats: 1.4 GiB RSS, 635 goroutines, 78 MiB/26 MiB/125 MiB GO alloc/idle/total, 101 MiB/146 MiB CGO alloc/total, 432.60cgo/sec, 0.94/0.04 %(u/s)time, 0.04 %gc (8x)
W180815 06:00:15.596183 119392 storage/store.go:3747  [n3,s3] handle raft ready: 0.5s [processed=0]
W180815 06:00:16.019715 119062 storage/node_liveness.go:558  [n2,hb] slow heartbeat took 1.7s
I180815 06:00:17.108312 121236 rpc/nodedialer/nodedialer.go:92  [consistencyChecker,n3,s3,r24/3:/Table/5{3-4}] connection to n3 established
W180815 06:00:17.286234 118432 storage/store.go:3747  [n1,s1] handle raft ready: 0.8s [processed=0]
W180815 06:00:17.358315 119426 storage/node_liveness.go:558  [n3,hb] slow heartbeat took 1.2s
W180815 06:00:20.665958 118447 storage/store.go:3747  [n1,s1] handle raft ready: 0.6s [processed=0]
I180815 06:00:20.693671 118204 storage/closed_timestamp_test.go:62  SucceedsSoon: sql: no rows in result set
W180815 06:00:21.002301 119062 storage/node_liveness.go:558  [n2,hb] slow heartbeat took 2.2s
W180815 06:00:21.527906 118504 storage/node_liveness.go:558  [n1,hb] slow heartbeat took 1.5s
I180815 06:00:22.375954 118436 storage/replica_proposal.go:214  [n1,s1,r29/1:/{Table/58-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 06:00:22.446925 121304 storage/replica_command.go:280  [split,n1,s1,r30/1:/{Table/59-Max}] initiating a split of this range at key /Table/60 [r31]
I180815 06:00:22.587999 119037 server/status/runtime.go:424  [n2] runtime stats: 1.4 GiB RSS, 642 goroutines, 75 MiB/30 MiB/125 MiB GO alloc/idle/total, 105 MiB/150 MiB CGO alloc/total, 188.84cgo/sec, 0.95/0.05 %(u/s)time, 0.09 %gc (10x)
I180815 06:00:22.877046 119039 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 06:00:23.971732 121217 storage/replica_range_lease.go:554  [replicate,n1,s1,r27/1:/Table/5{6-7}] transferring lease to s2
I180815 06:00:24.087444 119417 server/status/runtime.go:424  [n3] runtime stats: 1.4 GiB RSS, 647 goroutines, 63 MiB/42 MiB/125 MiB GO alloc/idle/total, 105 MiB/150 MiB CGO alloc/total, 272.29cgo/sec, 0.97/0.04 %(u/s)time, 0.09 %gc (9x)
I180815 06:00:24.188085 119008 storage/replica_proposal.go:214  [n2,s2,r27/2:/Table/5{6-7}] new range lease repl=(n2,s2):2 seq=4 start=1534312823.972032754,0 epo=1 pro=1534312823.972114705,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 06:00:24.281219 121217 storage/replica_range_lease.go:617  [replicate,n1,s1,r27/1:/Table/5{6-7}] done transferring lease to s2: <nil>
I180815 06:00:24.511082 118463 server/status/runtime.go:424  [n1] runtime stats: 1.4 GiB RSS, 643 goroutines, 49 MiB/54 MiB/125 MiB GO alloc/idle/total, 105 MiB/150 MiB CGO alloc/total, 319.51cgo/sec, 0.98/0.04 %(u/s)time, 0.08 %gc (8x)
I180815 06:00:24.566409 119419 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 06:00:24.618246 118424 storage/replica_proposal.go:214  [n1,s1,r30/1:/{Table/59-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 06:00:24.768017 121355 storage/replica_command.go:280  [split,n1,s1,r31/1:/{Table/60-Max}] initiating a split of this range at key /Table/61 [r32]
I180815 06:00:24.860405 118465 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 06:00:26.620642 118204 storage/closed_timestamp_test.go:62  SucceedsSoon: sql: no rows in result set
I180815 06:00:27.688714 118406 storage/replica_proposal.go:214  [n1,s1,r31/1:/{Table/60-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 06:00:27.879818 121432 storage/replica_command.go:280  [split,n1,s1,r32/1:/{Table/61-Max}] initiating a split of this range at key /Table/62 [r33]
I180815 06:00:28.057947 121424 storage/replica_range_lease.go:554  [replicate,n1,s1,r31/1:/Table/6{0-1}] transferring lease to s2
I180815 06:00:28.355923 121424 storage/replica_range_lease.go:617  [replicate,n1,s1,r31/1:/Table/6{0-1}] done transferring lease to s2: <nil>
I180815 06:00:28.448342 119000 storage/replica_proposal.go:214  [n2,s2,r31/2:/Table/6{0-1}] new range lease repl=(n2,s2):2 seq=4 start=1534312828.058261132,0 epo=1 pro=1534312828.058336284,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
W180815 06:00:29.108481 119062 storage/node_liveness.go:558  [n2,hb] slow heartbeat took 1.3s
W180815 06:00:29.493305 119006 storage/store.go:3747  [n2,s2] handle raft ready: 0.5s [processed=0]
W180815 06:00:29.887062 119383 storage/store.go:3747  [n3,s3] handle raft ready: 0.6s [processed=0]
I180815 06:00:30.139388 118204 storage/closed_timestamp_test.go:62  SucceedsSoon: sql: no rows in result set
W180815 06:00:30.168187 118393 storage/store.go:3747  [n1,s1] handle raft ready: 0.5s [processed=0]
W180815 06:00:30.825876 118393 storage/store.go:3747  [n1,s1] handle raft ready: 0.6s [processed=0]
W180815 06:00:31.017405 119006 storage/store.go:3747  [n2,s2] handle raft ready: 0.6s [processed=0]
I180815 06:00:32.327245 119037 server/status/runtime.go:424  [n2] runtime stats: 1.4 GiB RSS, 644 goroutines, 68 MiB/35 MiB/125 MiB GO alloc/idle/total, 114 MiB/159 MiB CGO alloc/total, 423.44cgo/sec, 0.97/0.04 %(u/s)time, 0.06 %gc (7x)
W180815 06:00:32.481285 118504 storage/node_liveness.go:558  [n1,hb] slow heartbeat took 3.5s
W180815 06:00:32.784804 119426 storage/node_liveness.go:558  [n3,hb] slow heartbeat took 3.2s
I180815 06:00:32.867993 119039 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 06:00:32.935129 119062 storage/node_liveness.go:817  [n2,hb] retrying liveness update after storage.errRetryLiveness: TransactionStatusError: could not commit in one phase as requested (REASON_UNKNOWN)
W180815 06:00:33.779756 119062 storage/node_liveness.go:558  [n2,hb] slow heartbeat took 1.5s
I180815 06:00:34.245037 119417 server/status/runtime.go:424  [n3] runtime stats: 1.4 GiB RSS, 645 goroutines, 74 MiB/30 MiB/125 MiB GO alloc/idle/total, 114 MiB/159 MiB CGO alloc/total, 361.36cgo/sec, 0.96/0.04 %(u/s)time, 0.07 %gc (8x)
I180815 06:00:34.567549 119419 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 06:00:34.626856 118463 server/status/runtime.go:424  [n1] runtime stats: 1.4 GiB RSS, 640 goroutines, 50 MiB/53 MiB/125 MiB GO alloc/idle/total, 114 MiB/159 MiB CGO alloc/total, 345.50cgo/sec, 0.95/0.05 %(u/s)time, 0.07 %gc (8x)
I180815 06:00:34.854219 118465 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180815 06:00:35.686166 118450 storage/replica_proposal.go:214  [n1,s1,r32/1:/{Table/61-Max}] new range lease repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 06:00:35.714288 121568 storage/replica_command.go:280  [split,n1,s1,r33/1:/{Table/62-Max}] initiating a split of this range at key /Table/63 [r34]
I180815 06:00:35.727888 121610 storage/replica_range_lease.go:554  [replicate,n1,s1,r32/1:/Table/6{1-2}] transferring lease to s3
I180815 06:00:36.103779 119348 storage/replica_proposal.go:214  [n3,s3,r32/3:/Table/6{1-2}] new range lease repl=(n3,s3):3 seq=4 start=1534312835.868619392,0 epo=1 pro=1534312835.868669416,0 following repl=(n1,s1):1 seq=3 start=1534312734.409514512,0 epo=1 pro=1534312743.779399679,0
I180815 06:00:36.129799 121610 storage/replica_range_lease.go:617  [replicate,n1,s1,r32/1:/Table/6{1-2}] done transferring lease to s3: <nil>
I180815 06:00:37.094327 118204 storage/closed_timestamp_test.go:62  SucceedsSoon: sql: no rows in result set
I180815 06:00:37.102967 121583 util/stop/stopper.go:537  quiescing; tasks left:
1      ts.poller: poll
1      node.Node: batch
I180815 06:00:37.105904 121581 util/stop/stopper.go:537  quiescing; tasks left:
2      node.Node: batch
1      ts.poller: poll
1      node.Node: writing summary
1      [async] transport racer
1      [async] storage.split: processing replica
1      [async] kv.TxnCoordSender: heartbeat loop
I180815 06:00:37.108333 121582 util/stop/stopper.go:537  quiescing; tasks left:
1      ts.poller: poll
W180815 06:00:37.114225 119518 storage/raft_transport.go:574  [n1] while processing outgoing Raft queue to node 3: rpc error: code = Unavailable desc = transport is closing:
W180815 06:00:37.243643 119038 ts/db.go:195  [n2,ts-poll] error writing time series data: failed to send RPC: sending to all 3 replicas failed; last error: <nil> context canceled
W180815 06:00:37.245329 121568 internal/client/txn.go:514  [split,n1,s1,r33/1:/{Table/62-Max}] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
E180815 06:00:37.254653 121568 storage/queue.go:788  [split,n1,s1,r33/1:/{Table/62-Max}] unable to split [n1,s1,r33/1:/{Table/62-Max}] at key "/Table/63": split at key /Table/63 failed: node unavailable; try another peer
I180815 06:00:37.255146 121581 util/stop/stopper.go:537  quiescing; tasks left:
2      node.Node: batch
1      ts.poller: poll
1      node.Node: writing summary
1      [async] transport racer
1      [async] kv.TxnCoordSender: heartbeat loop
W180815 06:00:37.138868 120201 storage/raft_transport.go:574  [n3] while processing outgoing Raft queue to node 2: rpc error: code = Canceled desc = grpc: the client connection is closing:
W180815 06:00:37.161394 119295 gossip/infostore.go:303  [n3] node unavailable; try another peer
W180815 06:00:37.165995 120203 storage/raft_transport.go:574  [n2] while processing outgoing Raft queue to node 3: rpc error: code = Canceled desc = grpc: the client connection is closing:
W180815 06:00:37.196165 119490 storage/raft_transport.go:574  [n2] while processing outgoing Raft queue to node 1: EOF:
W180815 06:00:37.246540 118931 gossip/gossip.go:1351  [n2] no incoming or outgoing connections
W180815 06:00:37.252228 118464 ts/db.go:195  [n1,ts-poll] error writing time series data: node unavailable; try another peer
W180815 06:00:37.258281 118464 ts/db.go:198  [n1,ts-poll] node unavailable; try another peer
I180815 06:00:37.258636 121581 util/stop/stopper.go:537  quiescing; tasks left:
2      node.Node: batch
1      node.Node: writing summary
1      [async] transport racer
1      [async] kv.TxnCoordSender: heartbeat loop
W180815 06:00:37.260814 119062 internal/client/txn.go:514  [n2,hb] failure aborting transaction: node unavailable; try another peer; abort caused by: result is ambiguous (error=context canceled [exhausted])
I180815 06:00:37.261117 119062 storage/node_liveness.go:817  [n2,hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (error=context canceled [exhausted])
W180815 06:00:37.262556 119062 internal/client/txn.go:514  [n2,hb] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
W180815 06:00:37.306631 119062 storage/node_liveness.go:494  [n2,hb] failed node liveness heartbeat: node unavailable; try another peer
I180815 06:00:37.335980 121583 util/stop/stopper.go:537  quiescing; tasks left:
1      ts.poller: poll
I180815 06:00:37.262872 118484 kv/transport_race.go:67  transport race promotion: ran 35 iterations on up to 3114 requests
I180815 06:00:37.264548 121581 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: writing summary
1      node.Node: batch
1      [async] transport racer
W180815 06:00:37.268386 119523 storage/raft_transport.go:574  [n1] while processing outgoing Raft queue to node 2: rpc error: code = Canceled desc = grpc: the client connection is closing:
I180815 06:00:37.339350 121581 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: writing summary
1      node.Node: batch
W180815 06:00:37.135662 119552 storage/raft_transport.go:574  [n3] while processing outgoing Raft queue to node 1: rpc error: code = Unavailable desc = transport is closing:
W180815 06:00:37.342128 119208 gossip/gossip.go:1351  [n3] no incoming or outgoing connections
W180815 06:00:37.575996 121676 rpc/nodedialer/nodedialer.go:89  [ct-client] unable to connect to n3: unable to dial n3: breaker open
I180815 06:00:37.758647 121581 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: writing summary
W180815 06:00:37.776085 121643 rpc/nodedialer/nodedialer.go:89  [ct-client] unable to connect to n1: unable to dial n1: breaker open
W180815 06:00:37.780266 121645 rpc/nodedialer/nodedialer.go:89  [ct-client] unable to connect to n2: unable to dial n2: breaker open
W180815 06:00:37.841726 121646 rpc/nodedialer/nodedialer.go:89  [ct-client] unable to connect to n3: unable to dial n3: breaker open
W180815 06:00:37.983827 121701 rpc/nodedialer/nodedialer.go:89  [ct-client] unable to connect to n1: unable to dial n1: breaker open
W180815 06:00:38.057535 118465 server/node.go:886  [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:queue.split.process.failure Value:1}]}
W180815 06:00:38.058632 118465 gossip/infostore.go:303  [n1] node unavailable; try another peer
W180815 06:00:38.137145 118465 server/node.go:857  [n1,summaries] error recording status summaries: node unavailable; try another peer
W180815 06:00:38.244431 121704 rpc/nodedialer/nodedialer.go:89  [ct-client] unable to connect to n2: unable to dial n2: breaker open
W180815 06:00:38.342430 119418 ts/db.go:195  [n3,ts-poll] error writing time series data: node unavailable; try another peer
E180815 06:00:38.353497 119444 jobs/registry.go:285  error while adopting jobs: adopt-job: node unavailable; try another peer
--- FAIL: TestClosedTimestampCanServe (104.78s)
	soon.go:49: condition failed to evaluate within 45s: sql: no rows in result set
		goroutine 118204 [running]:
		runtime/debug.Stack(0xa7a358200, 0xc422061980, 0x3a612e0)
			/usr/local/go/src/runtime/debug/stack.go:24 +0xb5
		github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x3abfb00, 0xc420b7e000, 0xc4220618c0)
			/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:50 +0x172
		github.com/cockroachdb/cockroach/pkg/storage_test.TestClosedTimestampCanServe(0xc420b7e000)
			/go/src/github.com/cockroachdb/cockroach/pkg/storage/closed_timestamp_test.go:62 +0x43b
		testing.tRunner(0xc420b7e000, 0x337c7f0)
			/usr/local/go/src/testing/testing.go:777 +0x16e
		created by testing.(*T).Run
			/usr/local/go/src/testing/testing.go:824 +0x565
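The failure above comes from testutils.SucceedsSoon giving up after its 45s deadline. As a rough illustration (not the actual helper, whose backoff and error reporting differ), the retry pattern looks roughly like this in Go:

// succeedsSoon is a minimal sketch of the retry pattern behind
// testutils.SucceedsSoon: re-evaluate a condition with exponential
// backoff until it returns nil or a 45s deadline expires. The initial
// backoff and cap here are assumptions; illustrative only.
package main

import (
	"fmt"
	"time"
)

func succeedsSoon(fn func() error) error {
	const maxWait = 45 * time.Second
	deadline := time.Now().Add(maxWait)
	wait := 5 * time.Millisecond
	var err error
	for time.Now().Before(deadline) {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(wait)
		if wait < time.Second {
			wait *= 2 // back off, capped at roughly one second
		}
	}
	return fmt.Errorf("condition failed to evaluate within %s: %s", maxWait, err)
}

func main() {
	start := time.Now()
	// Example condition: fails for the first 50ms, then succeeds,
	// standing in for the test's "sql: no rows in result set" query.
	err := succeedsSoon(func() error {
		if time.Since(start) < 50*time.Millisecond {
			return fmt.Errorf("sql: no rows in result set")
		}
		return nil
	})
	fmt.Println(err) // <nil>
}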

cockroach-teamcity added this to the 2.1 milestone Aug 15, 2018
cockroach-teamcity added the C-test-failure (Broken test (automatically or manually discovered)) and O-robot (Originated from a bot) labels Aug 15, 2018
tbg added the A-kv-client (Relating to the KV client and the KV interface) label Aug 21, 2018
@cockroach-teamcity

SHA: https://github.com/cockroachdb/cockroach/commits/eb54cb65ec8da407c8ce5e971157bb1c03efd9e8

Parameters:

TAGS=
GOFLAGS=-race

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
make stress TESTS=TestClosedTimestampCanServe PKG=github.com/cockroachdb/cockroach/pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=856259&tab=buildLog

@cockroach-teamcity

SHA: https://github.com/cockroachdb/cockroach/commits/eb54cb65ec8da407c8ce5e971157bb1c03efd9e8

Parameters:

TAGS=
GOFLAGS=-race

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
make stress TESTS=TestClosedTimestampCanServe PKG=github.com/cockroachdb/cockroach/pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=856717&tab=buildLog

@cockroach-teamcity

SHA: https://github.com/cockroachdb/cockroach/commits/d6a4a0d6a7e84debc0195822f6cd196405c584c4

Parameters:

TAGS=
GOFLAGS=-race

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
make stress TESTS=TestClosedTimestampCanServe PKG=github.com/cockroachdb/cockroach/pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=858463&tab=buildLog

@cockroach-teamcity

SHA: https://github.com/cockroachdb/cockroach/commits/a6e5d24201c45b49415d95be9b04d9d0a523e44c

Parameters:

TAGS=
GOFLAGS=-race

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
make stress TESTS=TestClosedTimestampCanServe PKG=github.com/cockroachdb/cockroach/pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=859885&tab=buildLog

@cockroach-teamcity

SHA: https://github.com/cockroachdb/cockroach/commits/a6e5d24201c45b49415d95be9b04d9d0a523e44c

Parameters:

TAGS=
GOFLAGS=-race

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
make stress TESTS=TestClosedTimestampCanServe PKG=github.com/cockroachdb/cockroach/pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=860345&tab=buildLog

@cockroach-teamcity

SHA: https://github.com/cockroachdb/cockroach/commits/dbf5d6ed987c2c240bcbf0bb2b514c8e2186f855

Parameters:

TAGS=
GOFLAGS=-race

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
make stress TESTS=TestClosedTimestampCanServe PKG=github.com/cockroachdb/cockroach/pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=865713&tab=buildLog

tbg changed the title from "storage: TestClosedTimestampCanServe failed under stress" to "storage: TestClosedTimestampCanServe failed under stress [skipped]" Aug 30, 2018
tbg added a commit to tbg/cockroach that referenced this issue Sep 20, 2018

tbg commented Sep 20, 2018

Closing in anticipation of #30456.

tbg closed this as completed Sep 20, 2018
craig bot pushed a commit that referenced this issue Sep 20, 2018
30405: roachtest: mark acceptance as stable r=petermattis a=tschottdorf

all of its subtests are already stable, but while running a test locally I
noticed that the top-level test was marked as unstable and thus reported as
passing. I'm not sure, but this might mean that the top-level test would
never actually fail? Either way, better to mark it as stable explicitly.

We should also give some thought to how diverging notions of Stable in
sub- vs. top-level tests are treated; I'm not sure this is well-defined.

Release note: None

30446: opt: fix panic when srf used with GROUP BY r=rytaft a=rytaft

Instead of panicking, we now throw an appropriate error.

Fixes #30412

Release note (bug fix): Fixed a panic that occurred when a
generator function such as unnest was used in the SELECT list
in the presence of GROUP BY.
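
For illustration, the class of query this fixes looks something like the following hypothetical repro sketch: the connection string, table t, and column a are made up, and the snippet assumes the lib/pq pgwire driver.

// Hypothetical repro shape for the #30446 fix: a set-returning function
// such as unnest in the SELECT list combined with GROUP BY. After the fix
// the server returns an ordinary SQL error instead of panicking.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // pgwire driver; CockroachDB speaks the Postgres wire protocol
)

func main() {
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Assumes a table t with a column a exists. This query shape
	// previously panicked the server; it should now just error.
	_, err = db.Query(`SELECT unnest(ARRAY[1, 2, 3]) FROM t GROUP BY a`)
	fmt.Println("expected a SQL error, not a panic:", err)
}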

30450: roachtest: remove now-unnecessary hack r=petermattis a=tschottdorf

Closes #27717.

Release note: None

30451: storage: give TestReplicateRemovedNodeDisruptiveElection more time r=petermattis a=tschottdorf

Perhaps:

Fixes #27253.

Release note: None

30452: storage: de-flake TestReplicaIDChangePending r=petermattis a=tschottdorf

setReplicaID refreshes the proposal and was thus synchronously writing
to the commandProposed chan. This channel could have filled up due to
an earlier reproposal already, deadlocking the test.

Fixes #28132.

Release note: None
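
The deadlock described here is the classic full-buffered-channel shape. A self-contained sketch (not the actual replica code; the channel and names are stand-ins):

// Minimal sketch of the deadlock class behind #28132, with made-up names:
// a refresh performs a blocking send on a channel whose buffer an earlier
// reproposal already filled, so it can only proceed once someone drains
// the stale event.
package main

import "fmt"

func main() {
	proposed := make(chan struct{}, 1)
	proposed <- struct{}{} // an earlier reproposal already filled the buffer

	done := make(chan struct{})
	go func() {
		// Analogue of setReplicaID refreshing the proposal: a synchronous
		// send that blocks while the buffer is full.
		proposed <- struct{}{}
		close(done)
	}()

	// Without this receive the goroutine above blocks forever; that is
	// the deadlock the test hit. Draining the stale event unblocks it.
	<-proposed
	<-done
	fmt.Println("refresh completed after draining the stale event")
}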

30455: testcluster: improve AddReplicas check r=petermattis a=tschottdorf

AddReplicas was verifying that a replica had indeed been added, but
there's no guarantee that the replicate queue won't have removed it
in the meantime. This change attempts to work around that somewhat.
The real solution is not to provide that guarantee, but some tests
likely rely on it (and the failure is extremely rare, i.e. the new
for loop basically never runs).

Observed in #28368.

Release note: None
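
A sketch of the shape of that workaround (the helper names here are hypothetical, not the real testcluster API): retry the post-add verification, since the replicate queue can race with the test and remove the replica between the add and the check.

// Hypothetical sketch of the AddReplicas workaround described above: after
// adding a replica, retry the verification a few times because a background
// replicate queue may have removed it again. Helper names are made up.
package main

import (
	"errors"
	"fmt"
	"time"
)

func addReplicaWithRetry(add func() error, hasReplica func() bool) error {
	// In practice the race is extremely rare, so this loop almost never
	// runs more than once ("the new for loop basically never runs").
	for attempt := 0; attempt < 5; attempt++ {
		if err := add(); err != nil {
			return err
		}
		if hasReplica() {
			return nil // common case: replica added and still present
		}
		time.Sleep(10 * time.Millisecond) // racily removed; try again
	}
	return errors.New("replica was repeatedly removed by the replicate queue")
}

func main() {
	present := false
	err := addReplicaWithRetry(
		func() error { present = true; return nil },
		func() bool { return present },
	)
	fmt.Println(err) // <nil>
}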

30456: storage: unskip TestClosedTimestampCanServe for non-race r=petermattis a=tschottdorf

Fixes #28607.

Release note: None

Co-authored-by: Tobias Schottdorf <tobias.schottdorf@gmail.com>
Co-authored-by: Rebecca Taft <becca@cockroachlabs.com>
tbg added a commit to tbg/cockroach that referenced this issue Sep 21, 2018