
test: flaky crud/TestCrudGenerateData #288

Closed
oleg-jukovec opened this issue Apr 25, 2023 · 4 comments · Fixed by #292
Assignees: oleg-jukovec
Labels: bug, teamE

Comments

@oleg-jukovec
Collaborator

2023-04-25T19:23:49.4163123Z --- FAIL: TestCrudGenerateData (0.01s)
2023-04-25T19:23:49.4163662Z     --- PASS: TestCrudGenerateData/Insert (0.00s)
2023-04-25T19:23:49.4164227Z     --- FAIL: TestCrudGenerateData/InsertObject (0.00s)
2023-04-25T19:23:49.4167665Z         tarantool_test.go:467: Failed to perform CRUD request on CRUD side: map[interface {}]interface {}{"class_name":"InsertError", "err":"Failed to insert: Duplicate key exists in unique index \"primary_index\" in space \"test\" with old tuple - [1019, 222, \"bla\"] and new tuple - [1019, 222, \"bla\"]", "file":"...ool/crud/testdata/.rocks/share/tarantool/crud/insert.lua", "line":0x70, "str":"InsertError: Failed to insert: Duplicate key exists in unique index \"primary_index\" in space \"test\" with old tuple - [1019, 222, \"bla\"] and new tuple - [1019, 222, \"bla\"]"}

log.txt

oleg-jukovec added the bug and 1sp labels on Apr 25, 2023
@oleg-jukovec
Collaborator Author

I hit the following error on CI:

2023-05-11T06:50:37.5520550Z === RUN   TestCrudGenerateData/InsertObject
2023-05-11T06:50:37.5521470Z --- FAIL: TestCrudGenerateData (1.80s)
2023-05-11T06:50:37.5522210Z     --- PASS: TestCrudGenerateData/Insert (0.63s)
2023-05-11T06:50:37.5522810Z     --- FAIL: TestCrudGenerateData/InsertObject (1.14s)
2023-05-11T06:50:37.5553390Z         tarantool_test.go:467: Failed to Do CRUD request: client timeout for request 32 (0x4003)

@oleg-jukovec
Collaborator Author

2023-05-11T07:39:11.0596604Z === RUN   TestCrudGenerateData
2023-05-11T07:39:11.0605354Z --- FAIL: TestCrudGenerateData (0.01s)
2023-05-11T07:39:11.0606224Z     tarantool_test.go:457: Failed to connect: failed to read greeting: read tcp 127.0.0.1:50188->127.0.0.1:3013: read: connection reset by peer

oleg-jukovec self-assigned this on May 11, 2023
@oleg-jukovec
Collaborator Author

2023-05-12T22:22:52.4457910Z === RUN   TestRequestsSetters/StatsRequest
2023-05-12T22:22:52.4458255Z --- PASS: TestRequestsSetters (0.00s)
2023-05-12T22:22:52.4458674Z     --- PASS: TestRequestsSetters/InsertRequest (0.00s)
2023-05-12T22:22:52.4459132Z     --- PASS: TestRequestsSetters/InsertObjectRequest (0.00s)
2023-05-12T22:22:52.4459600Z     --- PASS: TestRequestsSetters/InsertManyRequest (0.00s)
2023-05-12T22:22:52.4460081Z     --- PASS: TestRequestsSetters/InsertObjectManyRequest (0.00s)
2023-05-12T22:22:52.4460530Z     --- PASS: TestRequestsSetters/GetRequest (0.00s)
2023-05-12T22:22:52.4460963Z     --- PASS: TestRequestsSetters/UpdateRequest (0.00s)
2023-05-12T22:22:52.4461389Z     --- PASS: TestRequestsSetters/DeleteRequest (0.00s)
2023-05-12T22:22:52.4461825Z     --- PASS: TestRequestsSetters/ReplaceRequest (0.00s)
2023-05-12T22:22:52.4462298Z     --- PASS: TestRequestsSetters/ReplaceObjectRequest (0.00s)
2023-05-12T22:22:52.4462771Z     --- PASS: TestRequestsSetters/ReplaceManyRequest (0.00s)
2023-05-12T22:22:52.4463253Z     --- PASS: TestRequestsSetters/ReplaceObjectManyRequest (0.00s)
2023-05-12T22:22:52.4463709Z     --- PASS: TestRequestsSetters/UpsertRequest (0.00s)
2023-05-12T22:22:52.4464165Z     --- PASS: TestRequestsSetters/UpsertObjectRequest (0.00s)
2023-05-12T22:22:52.4464634Z     --- PASS: TestRequestsSetters/UpsertManyRequest (0.00s)
2023-05-12T22:22:52.4465112Z     --- PASS: TestRequestsSetters/UpsertObjectManyRequest (0.00s)
2023-05-12T22:22:52.4465552Z     --- PASS: TestRequestsSetters/SelectRequest (0.00s)
2023-05-12T22:22:52.4465971Z     --- PASS: TestRequestsSetters/MinRequest (0.00s)
2023-05-12T22:22:52.4466390Z     --- PASS: TestRequestsSetters/MaxRequest (0.00s)
2023-05-12T22:22:52.4466825Z     --- PASS: TestRequestsSetters/TruncateRequest (0.00s)
2023-05-12T22:22:52.4467242Z     --- PASS: TestRequestsSetters/LenRequest (0.00s)
2023-05-12T22:22:52.4467671Z     --- PASS: TestRequestsSetters/CountRequest (0.00s)
2023-05-12T22:22:52.4468134Z     --- PASS: TestRequestsSetters/StorageInfoRequest (0.00s)
2023-05-12T22:22:52.4468570Z     --- PASS: TestRequestsSetters/StatsRequest (0.00s)
2023-05-12T22:22:52.4468856Z === RUN   TestCrudGenerateData
2023-05-12T22:22:52.4469161Z --- FAIL: TestCrudGenerateData (0.00s)
2023-05-12T22:22:52.4469911Z     tarantool_test.go:457: Failed to connect: failed to read greeting: read tcp 127.0.0.1:37264->127.0.0.1:3013: read: connection reset by peer
2023-05-12T22:22:52.4470279Z === RUN   TestCrudProcessData

@oleg-jukovec
Collaborator Author

oleg-jukovec commented May 13, 2023

The reason is that the replicaset is configured on the same port as TEST_TNT_LISTEN; pay attention to the last three lines of the log below:

2023-05-13T21:46:58.9015370Z 2023-05-13 21:45:09.389 [8038] main/103/config.lua I> "tuple.keydef" module is not found. Built-in "key_def" is used
2023-05-13T21:46:58.9016092Z 2023-05-13 21:45:09.391 [8038] main/103/config.lua I> "tuple.merger" module is not found. Built-in "merger" is used
2023-05-13T21:46:58.9016695Z 2023-05-13 21:45:09.392 [8038] main/103/config.lua C> Tarantool 2.8.4-0-g47e6bd362
2023-05-13T21:46:58.9017225Z 2023-05-13 21:45:09.393 [8038] main/103/config.lua C> log level 5
2023-05-13T21:46:58.9017715Z 2023-05-13 21:45:09.393 [8038] main/103/config.lua I> wal/engine cleanup is paused
2023-05-13T21:46:58.9018297Z 2023-05-13 21:45:09.393 [8038] main/103/config.lua I> mapping 268435456 bytes for memtx tuple arena...
2023-05-13T21:46:58.9018997Z 2023-05-13 21:45:09.393 [8038] main/103/config.lua I> Actual slab_alloc_factor calculated on the basis of desired slab_alloc_factor = 1.044274
2023-05-13T21:46:58.9019635Z 2023-05-13 21:45:09.393 [8038] main/103/config.lua I> mapping 134217728 bytes for vinyl tuple arena...
2023-05-13T21:46:58.9020355Z 2023-05-13 21:45:09.398 [8038] main/103/config.lua I> instance uuid ab4989b8-b95c-4f5d-afd0-06e64c95a714
2023-05-13T21:46:58.9021220Z 2023-05-13 21:45:09.398 [8038] main/103/config.lua I> tx_binary: stopped
2023-05-13T21:46:58.9021749Z 2023-05-13 21:45:09.398 [8038] main/103/config.lua I> initializing an empty data directory
2023-05-13T21:46:58.9022342Z 2023-05-13 21:45:09.532 [8038] main/103/config.lua I> assigned id 1 to replica ab4989b8-b95c-4f5d-afd0-06e64c95a714
2023-05-13T21:46:58.9022948Z 2023-05-13 21:45:09.532 [8038] main/103/config.lua I> cluster uuid d7c075d6-5254-413e-93ae-4a3a09a37244
2023-05-13T21:46:58.9023530Z 2023-05-13 21:45:09.534 [8038] snapshot/101/main I> saving snapshot `./00000000000000000000.snap.inprogress'
2023-05-13T21:46:58.9023996Z 2023-05-13 21:45:09.536 [8038] snapshot/101/main I> done
2023-05-13T21:46:58.9024460Z 2023-05-13 21:45:09.537 [8038] main/103/config.lua I> ready to accept requests
2023-05-13T21:46:58.9024941Z 2023-05-13 21:45:09.537 [8038] main/104/gc I> wal/engine cleanup is resumed
2023-05-13T21:46:58.9025607Z 2023-05-13 21:45:09.537 [8038] main/103/config.lua I> set 'log_level' configuration option to 5
2023-05-13T21:46:58.9026227Z 2023-05-13 21:45:09.537 [8038] main/105/checkpoint_daemon I> scheduled next checkpoint for Sat May 13 23:19:22 2023
2023-05-13T21:46:58.9026839Z 2023-05-13 21:45:09.537 [8038] main/103/config.lua I> set 'log_format' configuration option to "plain"
2023-05-13T21:46:58.9027513Z 2023-05-13 21:45:09.538 [8038] main/103/config.lua I> Starting configuration of replica ab4989b8-b95c-4f5d-afd0-06e64c95a714
2023-05-13T21:46:58.9028049Z 2023-05-13 21:45:09.538 [8038] main/103/config.lua I> I am master
2023-05-13T21:46:58.9028672Z 2023-05-13 21:45:09.538 [8038] main/103/config.lua I> Taking on replicaset master role...
2023-05-13T21:46:58.9029160Z 2023-05-13 21:45:09.538 [8038] main/103/config.lua I> tx_binary: stopped
2023-05-13T21:46:58.9029664Z 2023-05-13 21:45:09.538 [8038] main/103/config.lua I> tx_binary: bound to 127.0.0.1:3013
2023-05-13T21:46:58.9030227Z 2023-05-13 21:45:09.539 [8038] main/103/config.lua I> set 'listen' configuration option to "guest@127.0.0.1:3013"
2023-05-13T21:46:58.9030760Z 2023-05-13 21:45:09.539 [8038] main/103/config.lua I> connecting to 1 replicas
2023-05-13T21:46:58.9031656Z 2023-05-13 21:45:09.539 [8038] main/118/applier/guest@127.0.0.1:3013 I> remote master ab4989b8-b95c-4f5d-afd0-06e64c95a714 at 127.0.0.1:3013 running Tarantool 2.8.4
2023-05-13T21:46:58.9032262Z 2023-05-13 21:45:09.539 [8038] main/103/config.lua I> connected to 1 replicas
2023-05-13T21:46:58.9032765Z 2023-05-13 21:45:09.539 [8038] main/103/config.lua I> synchronizing with 1 replicas
2023-05-13T21:46:58.9033291Z 2023-05-13 21:45:09.539 [8038] main/118/applier/guest@127.0.0.1:3013 C> leaving orphan mode
2023-05-13T21:46:58.9033788Z 2023-05-13 21:45:09.539 [8038] main/103/config.lua I> replica set sync complete
2023-05-13T21:46:58.9034247Z 2023-05-13 21:45:09.539 [8038] main/103/config.lua C> leaving orphan mode
2023-05-13T21:46:58.9034837Z 2023-05-13 21:45:09.539 [8038] main/103/config.lua I> set 'replication' configuration option to ["guest@127.0.0.1:3013"]
2023-05-13T21:46:58.9035376Z 2023-05-13 21:45:09.539 [8038] main/103/config.lua I> Box has been configured
2023-05-13T21:46:58.9036021Z 2023-05-13 21:45:09.539 [8038] main/103/config.lua I> Initializing schema {0.1.15.0}
2023-05-13T21:46:58.9036560Z 2023-05-13 21:45:09.543 [8038] main/103/config.lua I> Upgrade vshard schema to {0.1.16.0}
2023-05-13T21:46:58.9037109Z 2023-05-13 21:45:09.543 [8038] main/103/config.lua I> Insert 'vshard_version' into _schema
2023-05-13T21:46:58.9037655Z 2023-05-13 21:45:09.544 [8038] main/103/config.lua I> Create function vshard.storage._call()
2023-05-13T21:46:58.9038240Z 2023-05-13 21:45:09.544 [8038] main/103/config.lua I> Successful vshard schema upgrade to {0.1.16.0}
2023-05-13T21:46:58.9038873Z 2023-05-13 21:45:09.544 [8038] main/119/vshard.gc I> gc_bucket_f has been started
2023-05-13T21:46:58.9039381Z 2023-05-13 21:45:09.544 [8038] main/120/vshard.recovery I> recovery_f has been started
2023-05-13T21:46:58.9040059Z 2023-05-13 21:45:09.544 [8038] main/103/config.lua I> Took on replicaset master role
2023-05-13T21:46:58.9040845Z 2023-05-13 21:45:09.545 [8038] main/121/vshard.rebalancer I> rebalancer_f has been started
2023-05-13T21:46:58.9041381Z 2023-05-13 21:45:09.548 [8038] main/103/config.lua I> Starting router configuration
2023-05-13T21:46:58.9041850Z 2023-05-13 21:45:09.548 [8038] main/103/config.lua I> Calling box.cfg()...
2023-05-13T21:46:58.9043472Z 2023-05-13 21:45:09.548 [8038] main/103/config.lua I> Box has been configured
2023-05-13T21:46:58.9044083Z 2023-05-13 21:45:09.550 [8038] main/123/127.0.0.1:3013 (net.box) I> connected to 127.0.0.1:3013
2023-05-13T21:46:58.9044635Z 2023-05-13 21:45:09.550 [8038] main/124/vshard.failover._static_router I> failover_f has been started
2023-05-13T21:46:58.9045606Z 2023-05-13 21:45:09.550 [8038] main/124/vshard.failover._static_router I> New replica storage(guest@127.0.0.1:3013) for replicaset(uuid="d7c075d6-5254-413e-93ae-4a3a09a37244", master=storage(guest@127.0.0.1:3013))
2023-05-13T21:46:58.9046316Z 2023-05-13 21:45:09.550 [8038] main/124/vshard.failover._static_router I> All replicas are ok
2023-05-13T21:46:58.9047021Z 2023-05-13 21:45:09.550 [8038] main/124/vshard.failover._static_router I> Failovering step is finished. Schedule next after 1.000000 seconds
2023-05-13T21:46:58.9047634Z 2023-05-13 21:45:09.551 [8038] main/122/127.0.0.1:3013 (net.box) I> connected to 127.0.0.1:3013
2023-05-13T21:46:58.9048188Z 2023-05-13 21:45:09.551 [8038] main/125/vshard.discovery._static_router I> discovery_f has been started
2023-05-13T21:46:58.9048977Z 2023-05-13 21:45:09.551 [8038] main/121/vshard.rebalancer I> Total active bucket count is not equal to total. Possibly a boostrap is not finished yet. Expected 300, but found 0
2023-05-13T21:46:58.9049802Z 2023-05-13 21:45:09.551 [8038] main/121/vshard.rebalancer I> Some buckets are not active, retry rebalancing later
2023-05-13T21:46:58.9050486Z 2023-05-13 21:45:09.556 [8038] main/103/config.lua I> Buckets from 1 to 300 are bootstrapped on "d7c075d6-5254-413e-93ae-4a3a09a37244"
2023-05-13T21:46:58.9051028Z 2023-05-13 21:45:09.557 [8038] main/103/config.lua I> tx_binary: stopped
2023-05-13T21:46:58.9051739Z 2023-05-13 21:45:09.557 [8038] main/103/config.lua I> tx_binary: bound to 127.0.0.1:3013
2023-05-13T21:46:58.9052344Z 2023-05-13 21:45:09.557 [8038] main/103/config.lua I> set 'listen' configuration option to "127.0.0.1:3013"
  1. These lines, which configure a single-replicaset sharding layout with this instance as the master, are executed in crud/config.lua:

local uri = 'guest@127.0.0.1:3013'
local cfg = {
    bucket_count = 300,
    sharding = {
        [box.info().cluster.uuid] = {
            replicas = {
                [box.info().uuid] = {
                    uri = uri,
                    name = 'storage',
                    master = true,
                },
            },
        },
    },
}

  2. The instance starts listening on 127.0.0.1:3013.
  3. The test suite successfully connects to the instance.
  4. These lines are executed in crud/config.lua:

-- Set listen only when every other thing is configured.
box.cfg{
    listen = os.getenv("TEST_TNT_LISTEN"), -- TEST_TNT_LISTEN == "127.0.0.1:3013"
}

  5. The instance reopens "127.0.0.1:3013".
  6. The connection established in step 3 is lost; one way to avoid the reopen is sketched below.
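
A minimal sketch of one way to avoid the drop, assuming the final box.cfg{listen = ...} is the only rebind: make that step a no-op when the vshard configuration has already bound the same address under its "guest@" URI. The guard below is hypothetical and is not necessarily the change that landed in #292.

-- Hypothetical guard at the end of crud/config.lua (illustrative only, not the
-- actual fix from #292): skip the final re-listen when the testing port is
-- already bound, possibly under a "guest@" URI set by the vshard configuration,
-- so a client that connected in the meantime is not dropped.
local listen = os.getenv("TEST_TNT_LISTEN") -- "127.0.0.1:3013" in CI
local current = type(box.cfg) == 'table' and box.cfg.listen or nil
if listen ~= nil and current ~= listen and current ~= ('guest@' .. listen) then
    box.cfg{ listen = listen }
end

With such a guard the testing port is opened exactly once, so the connection established in step 3 survives the rest of the configuration.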

oleg-jukovec added a commit that referenced this issue on May 13, 2023
An instance was listening on a testing port until the configuration
was complete. At the end of the configuration, the port was
reopened.

As a result, we saw connection loss in tests.

Closes #288
oleg-jukovec added further commits that referenced this issue on May 15, May 17, and May 18, 2023, all with the same commit message.