
Troubleshooting a full thread pool #838

Closed
tsgmq opened this issue Jun 1, 2022 · 3 comments · Fixed by #847
Comments
tsgmq commented Jun 1, 2022

Your question

As the system keeps running, com.alipay.sofa.jraft.storage.impl.LogManagerImpl#waitMap shows signs of slowly growing after a while; eventually the thread pool fills up and tasks are rejected with errors. Please help check whether any parameter is misconfigured. The business scenario is low TPS with large log entries: one raft persist roughly every 20 s, each batch around 3 MB.
Error message:
2022-06-01T07:30:37.835491Z ERROR [JRaft-NodeImpl-Disruptor-0] util.LogExceptionHandler - [,-]Handle NodeImpl disruptor event error, event is com.alipay.sofa.jraft.core.NodeImpl$LogEntryAndClosure@585bf3be
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@30c71409[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@39070b4b[Wrapped task = com.alipay.sofa.jraft.storage.impl.LogManagerImpl$$Lambda$2178/0x0000000801df28b8@61cf0671]] rejected from com.alipay.sofa.jraft.util.MetricThreadPoolExecutor@5014d2b5[Running, pool size = 100, active threads = 100, queued tasks = 0, completed tasks = 361064]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365) ~[?:?]
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:123) ~[?:?]
at com.alipay.sofa.jraft.util.Utils.runInThread(Utils.java:172) ~[jraft-core-1.3.10.bugfix_2.jar!/:?]
at com.alipay.sofa.jraft.storage.impl.LogManagerImpl.wakeupAllWaiter(LogManagerImpl.java:397) ~[jraft-core-1.3.10.bugfix_2.jar!/:?]
at com.alipay.sofa.jraft.storage.impl.LogManagerImpl.appendEntries(LogManagerImpl.java:334) ~[jraft-core-1.3.10.bugfix_2.jar!/:?]
at com.alipay.sofa.jraft.core.NodeImpl.executeApplyingTasks(NodeImpl.java:1401) ~[jraft-core-1.3.10.bugfix_2.jar!/:?]
at com.alipay.sofa.jraft.core.NodeImpl.access$300(NodeImpl.java:140) ~[jraft-core-1.3.10.bugfix_2.jar!/:?]
at com.alipay.sofa.jraft.core.NodeImpl$LogEntryAndClosureHandler.onEvent(NodeImpl.java:310) ~[jraft-core-1.3.10.bugfix_2.jar!/:?]
at com.alipay.sofa.jraft.core.NodeImpl$LogEntryAndClosureHandler.onEvent(NodeImpl.java:290) ~[jraft-core-1.3.10.bugfix_2.jar!/:?]
at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) [disruptor-3.4.2.jar!/:?]
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) [disruptor-3.4.2.jar!/:?]
at java.lang.Thread.run(Thread.java:833) [?:?]
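The rejection signature in the trace (pool size = 100, active threads = 100, queued tasks = 0) is what a fixed-size executor over a SynchronousQueue produces under the default AbortPolicy: with no task buffer, a submit fails the instant every worker is busy. A minimal, self-contained sketch of that failure mode (illustrative only; jraft's actual executor is a MetricThreadPoolExecutor, and the SynchronousQueue is inferred from the "queued tasks = 0" line):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {

    /** Returns true if a submit is rejected once every worker is busy. */
    static boolean saturatePool() {
        // Fixed-size pool over a SynchronousQueue: there is no task buffer,
        // matching "queued tasks = 0" in the error above.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<>(),
                new ThreadPoolExecutor.AbortPolicy()); // default policy: throw on rejection
        CountDownLatch release = new CountDownLatch(1);
        boolean rejected = false;
        try {
            // Occupy both workers with tasks that block until released.
            for (int i = 0; i < 2; i++) {
                pool.execute(() -> {
                    try { release.await(); } catch (InterruptedException ignored) { }
                });
            }
            // active threads == pool size and the queue cannot hold the task,
            // so this submit throws RejectedExecutionException.
            pool.execute(() -> { });
        } catch (RejectedExecutionException e) {
            rejected = true;
        } finally {
            release.countDown();
            pool.shutdown();
        }
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println("rejected=" + saturatePool()); // prints rejected=true
    }
}
```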

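In jraft-core 1.3.x, Utils.runInThread submits to a shared closure executor whose maximum size appears to be read from a system property at startup (jraft.closure.threadpool.size.max, defaulting to max(100, cpus * 5), which would explain the pool size of 100 above). Treat the property name as an assumption to verify against your jraft version before relying on it; enlarging the pool is at best a mitigation while the waitMap growth itself is investigated. A hypothetical startup flag:

```shell
# Hypothetical: verify the property name against your jraft-core sources;
# "your-app.jar" is a placeholder for the actual application jar.
java -Djraft.closure.threadpool.size.max=400 -jar your-app.jar
```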

tsgmq commented Jun 1, 2022

nodeId: <test-zone00/test04.zone00.test.net:7788::1>
state: STATE_LEADER
leaderId: test04.zone00.test.net:7788::1
term: 96
conf: ConfigurationEntry [id=LogId [index=463492461, term=96], conf=test04.zone00.test.net:7788,test03.zone00.sga.testnet.net:7788,test02.zone00.test.net:7788,test01.zone00.sgb.testnet.net:7788,test00.zone00.sga.testnet.net:7788, oldConf=]
targetPriority: -1
electionTimer:
RepeatedTimer{timeout=null, stopped=true, running=false, destroyed=false, invoking=false, timeoutMs=3000, name='JRaft-ElectionTimer-<test-zone00/test04.zone00.test.net:7788::1>'}
voteTimer:
RepeatedTimer{timeout=null, stopped=true, running=false, destroyed=false, invoking=false, timeoutMs=3000, name='JRaft-VoteTimer-<test-zone00/test04.zone00.test.net:7788::1>'}
stepDownTimer:
RepeatedTimer{timeout=HashedWheelTimeout(deadline: 106290540 ns later, task: com.alipay.sofa.jraft.util.RepeatedTimer$$Lambda$1120/0x00000008017d6200@4aff7415), stopped=false, running=true, destroyed=false, invoking=false, timeoutMs=1500, name='JRaft-StepDownTimer-<test-zone00/test04.zone00.test.net:7788::1>'}
snapshotTimer:
RepeatedTimer{timeout=HashedWheelTimeout(deadline: 520779749143 ns later, task: com.alipay.sofa.jraft.util.RepeatedTimer$$Lambda$1120/0x00000008017d6200@ffdf32e), stopped=false, running=true, destroyed=false, invoking=false, timeoutMs=600000, name='JRaft-SnapshotTimer-<test-zone00/test04.zone00.test.net:7788::1>'}
logManager:
storage: [463492404, 463492469]
diskId: LogId [index=463492469, term=96]
appliedId: LogId [index=463492469, term=96]
lastSnapshotId: LogId [index=463492461, term=96]
fsmCaller:
StateMachine [Idle]
ballotBox:
lastCommittedIndex: 463492469
pendingIndex: 463492470
pendingMetaQueueSize: 0
snapshotExecutor:
lastSnapshotTerm: 96
lastSnapshotIndex: 463492461
term: 95
savingSnapshot: false
loadingSnapshot: false
stopped: false
replicatorGroup:
replicators: [Replicator [state=Replicate, statInfo=<running=IDLE, firstLogIndex=463492469, lastLogIncluded=0, lastLogIndex=463492469, lastTermIncluded=0>, peerId=test01.zone00.sgb.testnet.net:7788, type=Follower], Replicator [state=Replicate, statInfo=<running=IDLE, firstLogIndex=463492469, lastLogIncluded=0, lastLogIndex=463492469, lastTermIncluded=0>, peerId=test00.zone00.sga.testnet.net:7788, type=Follower], Replicator [state=Replicate, statInfo=<running=IDLE, firstLogIndex=463492469, lastLogIncluded=0, lastLogIndex=463492469, lastTermIncluded=0>, peerId=test02.zone00.test.net:7788, type=Follower], Replicator [state=Replicate, statInfo=<running=IDLE, firstLogIndex=463492469, lastLogIncluded=0, lastLogIndex=463492469, lastTermIncluded=0>, peerId=test03.zone00.sga.testnet.net:7788, type=Follower]]
failureReplicators: {}
logStorage:

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)

L0 7/0 145.90 MB 0.9 0.0 0.0 0.0 3.2 3.2 0.0 1.0 0.0 42.2 78.61 75.62 169 0.465 0 0 0.0 0.0
L1 0/0 0.00 KB 0.0 4.0 3.1 0.8 1.3 0.5 0.0 0.4 63.6 21.1 63.84 62.74 37 1.725 6963 4756 0.0 0.0
Sum 7/0 145.90 MB 0.0 4.0 3.1 0.8 4.6 3.7 0.0 1.4 28.5 32.8 142.45 138.36 206 0.691 6963 4756 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 41.9 0.99 0.96 2 0.493 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)

Low 0/0 0.00 KB 0.0 4.0 3.1 0.8 1.3 0.5 0.0 0.0 63.6 21.1 63.84 62.74 37 1.725 6963 4756 0.0 0.0
High 0/0 0.00 KB 0.0 0.0 0.0 0.0 3.2 3.2 0.0 0.0 0.0 42.3 78.10 75.62 168 0.465 0 0 0.0 0.0
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 41.2 0.51 0.00 1 0.508 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB

Uptime(secs): 107379.1 total, 577.2 interval
Flush(GB): cumulative 3.243, interval 0.040
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 4.56 GB write, 0.04 MB/s write, 3.97 GB read, 0.04 MB/s read, 142.4 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache LRUCache@0x7f8750724840 capacity: 8.00 MB collections: 180 last_copies: 0 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): IndexBlock(19,14.39 KB,0.175667%) OtherBlock(8,1.33 KB,0.0162125%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
** Level 0 read latency histogram (micros):
Count: 6363 Average: 104.1000 StdDev: 53.53
Min: 1 Median: 104.8271 Max: 512
Percentiles: P50: 104.83 P75: 142.36 P99: 241.82 P99.9: 323.28 P99.99: 512.00

[ 0, 1 ] 3 0.047% 0.047%
( 1, 2 ] 224 3.520% 3.567% #
( 2, 3 ] 336 5.281% 8.848% #
( 3, 4 ] 143 2.247% 11.095%
( 4, 6 ] 192 3.017% 14.113% #
( 6, 10 ] 26 0.409% 14.521%
( 10, 15 ] 1 0.016% 14.537%
( 15, 22 ] 1 0.016% 14.553%
( 34, 51 ] 6 0.094% 14.647%
( 51, 76 ] 391 6.145% 20.792% #
( 76, 110 ] 2192 34.449% 55.241% #######
( 110, 170 ] 2331 36.634% 91.875% #######
( 170, 250 ] 505 7.937% 99.811% ##
( 250, 380 ] 10 0.157% 99.969%
( 380, 580 ] 2 0.031% 100.000%

** Level 1 read latency histogram (micros):
Count: 1733 Average: 104.1437 StdDev: 54.79
Min: 2 Median: 105.0876 Max: 369
Percentiles: P50: 105.09 P75: 143.46 P99: 241.52 P99.9: 249.62 P99.99: 357.47

( 1, 2 ] 82 4.732% 4.732% #
( 2, 3 ] 62 3.578% 8.309% #
( 3, 4 ] 44 2.539% 10.848% #
( 4, 6 ] 63 3.635% 14.484% #
( 6, 10 ] 6 0.346% 14.830%
( 34, 51 ] 8 0.462% 15.291%
( 51, 76 ] 113 6.520% 21.812% #
( 76, 110 ] 571 32.949% 54.761% #######
( 110, 170 ] 629 36.295% 91.056% #######
( 170, 250 ] 154 8.886% 99.942% ##
( 250, 380 ] 1 0.058% 100.000%

** DB Stats **
Uptime(secs): 107379.1 total, 577.2 interval
Cumulative writes: 6273 writes, 6336 keys, 6273 commit groups, 1.0 writes per commit group, ingest: 10.63 GB, 0.10 MB/s
Cumulative WAL: 6273 writes, 0 syncs, 6273.00 writes per sync, written: 10.63 GB, 0.10 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 55 writes, 55 keys, 55 commit groups, 1.0 writes per commit group, ingest: 100.30 MB, 0.17 MB/s
Interval WAL: 55 writes, 0 syncs, 55.00 writes per sync, written: 0.10 GB, 0.17 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

rocksdb.block.cache.miss COUNT : 7266
rocksdb.block.cache.hit COUNT : 6
rocksdb.block.cache.add COUNT : 418
rocksdb.block.cache.add.failures COUNT : 0
rocksdb.block.cache.index.miss COUNT : 256
rocksdb.block.cache.index.hit COUNT : 6
rocksdb.block.cache.index.add COUNT : 228
rocksdb.block.cache.index.bytes.insert COUNT : 184048
rocksdb.block.cache.index.bytes.evict COUNT : 0
rocksdb.block.cache.filter.miss COUNT : 0
rocksdb.block.cache.filter.hit COUNT : 0
rocksdb.block.cache.filter.add COUNT : 0
rocksdb.block.cache.filter.bytes.insert COUNT : 0
rocksdb.block.cache.filter.bytes.evict COUNT : 0
rocksdb.block.cache.data.miss COUNT : 7010
rocksdb.block.cache.data.hit COUNT : 0
rocksdb.block.cache.data.add COUNT : 190
rocksdb.block.cache.data.bytes.insert COUNT : 155459232
rocksdb.block.cache.bytes.read COUNT : 8816
rocksdb.block.cache.bytes.write COUNT : 155643280
rocksdb.bloom.filter.useful COUNT : 0
rocksdb.bloom.filter.full.positive COUNT : 0
rocksdb.bloom.filter.full.true.positive COUNT : 0
rocksdb.bloom.filter.micros COUNT : 0
rocksdb.persistent.cache.hit COUNT : 0
rocksdb.persistent.cache.miss COUNT : 0
rocksdb.sim.block.cache.hit COUNT : 0
rocksdb.sim.block.cache.miss COUNT : 0
rocksdb.memtable.hit COUNT : 274458
rocksdb.memtable.miss COUNT : 98
rocksdb.l0.hit COUNT : 96
rocksdb.l1.hit COUNT : 2
rocksdb.l2andup.hit COUNT : 0
rocksdb.compaction.key.drop.new COUNT : 4638
rocksdb.compaction.key.drop.obsolete COUNT : 238
rocksdb.compaction.key.drop.range_del COUNT : 4630
rocksdb.compaction.key.drop.user COUNT : 0
rocksdb.compaction.range_del.drop.obsolete COUNT : 238
rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0
rocksdb.compaction.cancelled COUNT : 0
rocksdb.number.keys.written COUNT : 6336
rocksdb.number.keys.read COUNT : 274556
rocksdb.number.keys.updated COUNT : 0
rocksdb.bytes.written COUNT : 11417042150
rocksdb.bytes.read COUNT : 497565661042
rocksdb.number.db.seek COUNT : 2
rocksdb.number.db.next COUNT : 1
rocksdb.number.db.prev COUNT : 0
rocksdb.number.db.seek.found COUNT : 2
rocksdb.number.db.next.found COUNT : 0
rocksdb.number.db.prev.found COUNT : 0
rocksdb.db.iter.bytes.read COUNT : 1792435
rocksdb.no.file.closes COUNT : 0
rocksdb.no.file.opens COUNT : 228
rocksdb.no.file.errors COUNT : 0
rocksdb.l0.slowdown.micros COUNT : 0
rocksdb.memtable.compaction.micros COUNT : 0
rocksdb.l0.num.files.stall.micros COUNT : 0
rocksdb.stall.micros COUNT : 0
rocksdb.db.mutex.wait.micros COUNT : 0
rocksdb.rate.limit.delay.millis COUNT : 0
rocksdb.num.iterators COUNT : 0
rocksdb.number.multiget.get COUNT : 0
rocksdb.number.multiget.keys.read COUNT : 0
rocksdb.number.multiget.bytes.read COUNT : 0
rocksdb.number.deletes.filtered COUNT : 0
rocksdb.number.merge.failures COUNT : 0
rocksdb.bloom.filter.prefix.checked COUNT : 0
rocksdb.bloom.filter.prefix.useful COUNT : 0
rocksdb.number.reseeks.iteration COUNT : 0
rocksdb.getupdatessince.calls COUNT : 0
rocksdb.block.cachecompressed.miss COUNT : 0
rocksdb.block.cachecompressed.hit COUNT : 0
rocksdb.block.cachecompressed.add COUNT : 0
rocksdb.block.cachecompressed.add.failures COUNT : 0
rocksdb.wal.synced COUNT : 0
rocksdb.wal.bytes COUNT : 11417042150
rocksdb.write.self COUNT : 6273
rocksdb.write.other COUNT : 0
rocksdb.write.timeout COUNT : 0
rocksdb.write.wal COUNT : 12546
rocksdb.compact.read.bytes COUNT : 4258749380
rocksdb.compact.write.bytes COUNT : 1434448519
rocksdb.flush.write.bytes COUNT : 3460137023
rocksdb.compact.read.marked.bytes COUNT : 0
rocksdb.compact.read.periodic.bytes COUNT : 0
rocksdb.compact.read.ttl.bytes COUNT : 0
rocksdb.compact.write.marked.bytes COUNT : 0
rocksdb.compact.write.periodic.bytes COUNT : 0
rocksdb.compact.write.ttl.bytes COUNT : 0
rocksdb.number.direct.load.table.properties COUNT : 0
rocksdb.number.superversion_acquires COUNT : 715
rocksdb.number.superversion_releases COUNT : 1
rocksdb.number.superversion_cleanups COUNT : 1
rocksdb.number.block.compressed COUNT : 7929
rocksdb.number.block.decompressed COUNT : 7155
rocksdb.number.block.not_compressed COUNT : 248
rocksdb.merge.operation.time.nanos COUNT : 0
rocksdb.filter.operation.time.nanos COUNT : 0
rocksdb.row.cache.hit COUNT : 0
rocksdb.row.cache.miss COUNT : 0
rocksdb.read.amp.estimate.useful.bytes COUNT : 0
rocksdb.read.amp.total.read.bytes COUNT : 0
rocksdb.number.rate_limiter.drains COUNT : 0
rocksdb.number.iter.skip COUNT : 1
rocksdb.blobdb.num.put COUNT : 0
rocksdb.blobdb.num.write COUNT : 0
rocksdb.blobdb.num.get COUNT : 0
rocksdb.blobdb.num.multiget COUNT : 0
rocksdb.blobdb.num.seek COUNT : 0
rocksdb.blobdb.num.next COUNT : 0
rocksdb.blobdb.num.prev COUNT : 0
rocksdb.blobdb.num.keys.written COUNT : 0
rocksdb.blobdb.num.keys.read COUNT : 0
rocksdb.blobdb.bytes.written COUNT : 0
rocksdb.blobdb.bytes.read COUNT : 0
rocksdb.blobdb.write.inlined COUNT : 0
rocksdb.blobdb.write.inlined.ttl COUNT : 0
rocksdb.blobdb.write.blob COUNT : 0
rocksdb.blobdb.write.blob.ttl COUNT : 0
rocksdb.blobdb.blob.file.bytes.written COUNT : 0
rocksdb.blobdb.blob.file.bytes.read COUNT : 0
rocksdb.blobdb.blob.file.synced COUNT : 0
rocksdb.blobdb.blob.index.expired.count COUNT : 0
rocksdb.blobdb.blob.index.expired.size COUNT : 0
rocksdb.blobdb.blob.index.evicted.count COUNT : 0
rocksdb.blobdb.blob.index.evicted.size COUNT : 0
rocksdb.blobdb.gc.num.files COUNT : 0
rocksdb.blobdb.gc.num.new.files COUNT : 0
rocksdb.blobdb.gc.failures COUNT : 0
rocksdb.blobdb.gc.num.keys.overwritten COUNT : 0
rocksdb.blobdb.gc.num.keys.expired COUNT : 0
rocksdb.blobdb.gc.num.keys.relocated COUNT : 0
rocksdb.blobdb.gc.bytes.overwritten COUNT : 0
rocksdb.blobdb.gc.bytes.expired COUNT : 0
rocksdb.blobdb.gc.bytes.relocated COUNT : 0
rocksdb.blobdb.fifo.num.files.evicted COUNT : 0
rocksdb.blobdb.fifo.num.keys.evicted COUNT : 0
rocksdb.blobdb.fifo.bytes.evicted COUNT : 0
rocksdb.txn.overhead.mutex.prepare COUNT : 0
rocksdb.txn.overhead.mutex.old.commit.map COUNT : 0
rocksdb.txn.overhead.duplicate.key COUNT : 0
rocksdb.txn.overhead.mutex.snapshot COUNT : 0
rocksdb.txn.get.tryagain COUNT : 0
rocksdb.number.multiget.keys.found COUNT : 0
rocksdb.num.iterator.created COUNT : 2
rocksdb.num.iterator.deleted COUNT : 2
rocksdb.block.cache.compression.dict.miss COUNT : 0
rocksdb.block.cache.compression.dict.hit COUNT : 0
rocksdb.block.cache.compression.dict.add COUNT : 0
rocksdb.block.cache.compression.dict.bytes.insert COUNT : 0
rocksdb.block.cache.compression.dict.bytes.evict COUNT : 0
rocksdb.block.cache.add.redundant COUNT : 0
rocksdb.block.cache.index.add.redundant COUNT : 0
rocksdb.block.cache.filter.add.redundant COUNT : 0
rocksdb.block.cache.data.add.redundant COUNT : 0
rocksdb.block.cache.compression.dict.add.redundant COUNT : 0
rocksdb.files.marked.trash COUNT : 0
rocksdb.files.deleted.immediately COUNT : 380
rocksdb.error.handler.bg.errro.count COUNT : 0
rocksdb.error.handler.bg.io.errro.count COUNT : 0
rocksdb.error.handler.bg.retryable.io.errro.count COUNT : 0
rocksdb.error.handler.autoresume.count COUNT : 0
rocksdb.error.handler.autoresume.retry.total.count COUNT : 0
rocksdb.error.handler.autoresume.success.count COUNT : 0
rocksdb.memtable.payload.bytes.at.flush COUNT : 11378657757
rocksdb.memtable.garbage.bytes.at.flush COUNT : 507882757
rocksdb.db.get.micros P50 : 342.505419 P95 : 555.978380 P99 : 578.547951 P100 : 7429.000000 COUNT : 274556 SUM : 97853990
rocksdb.db.write.micros P50 : 2733.147321 P95 : 4320.632230 P99 : 5878.375000 P100 : 8157.000000 COUNT : 6273 SUM : 16814321
rocksdb.compaction.times.micros P50 : 844736.842105 P95 : 3967500.000000 P99 : 4233500.000000 P100 : 4266147.000000 COUNT : 38 SUM : 63840975
rocksdb.compaction.times.cpu_micros P50 : 844736.842105 P95 : 3967500.000000 P99 : 4195443.000000 P100 : 4195443.000000 COUNT : 38 SUM : 62743356
rocksdb.subcompaction.setup.times.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.table.sync.micros P50 : 17943.262411 P95 : 27530.555556 P99 : 29970.000000 P100 : 29970.000000 COUNT : 179 SUM : 3276994
rocksdb.compaction.outfile.sync.micros P50 : 20400.000000 P95 : 67717.000000 P99 : 67717.000000 P100 : 67717.000000 COUNT : 46 SUM : 1392952
rocksdb.wal.file.sync.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.manifest.file.sync.micros P50 : 1127.755682 P95 : 1858.000000 P99 : 3255.000000 P100 : 4377.000000 COUNT : 229 SUM : 274292
rocksdb.table.open.io.micros P50 : 127.197452 P95 : 166.407643 P99 : 169.892994 P100 : 447.000000 COUNT : 228 SUM : 26618
rocksdb.db.multiget.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.read.block.compaction.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.read.block.get.micros P50 : 4035.719707 P95 : 6339.714912 P99 : 6580.750000 P100 : 8954.000000 COUNT : 7494 SUM : 32190249
rocksdb.write.raw.block.micros P50 : 742.630353 P95 : 1221.782764 P99 : 1284.936410 P100 : 1805.000000 COUNT : 8713 SUM : 5965058
rocksdb.l0.slowdown.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.memtable.compaction.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.num.files.stall.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.hard.rate.limit.delay.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.soft.rate.limit.delay.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.numfiles.in.singlecompaction P50 : 1.000000 P95 : 8.000000 P99 : 8.000000 P100 : 8.000000 COUNT : 38 SUM : 171
rocksdb.db.seek.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.db.write.stall P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.sst.read.micros P50 : 104.376402 P95 : 201.939302 P99 : 241.650379 P100 : 512.000000 COUNT : 8178 SUM : 843152
rocksdb.num.subcompactions.scheduled P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.bytes.per.read P50 : 1702250.316529 P95 : 2750758.827622 P99 : 2870151.765524 P100 : 2899494.000000 COUNT : 274556 SUM : 497565661042
rocksdb.bytes.per.write P50 : 1728012.684989 P95 : 2781680.561650 P99 : 2891781.483107 P100 : 6232127.000000 COUNT : 6273 SUM : 11417042150
rocksdb.bytes.per.multiget P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.bytes.compressed P50 : 1776271.377138 P95 : 2775290.972004 P99 : 2875058.194401 P100 : 2899524.000000 COUNT : 7929 SUM : 15390097227
rocksdb.bytes.decompressed P50 : 1734813.854854 P95 : 2761121.894410 P99 : 2872224.378882 P100 : 2899524.000000 COUNT : 7155 SUM : 13531672184
rocksdb.compression.times.nanos P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.decompression.times.nanos P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.read.num.merge_operands P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.key.size P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.value.size P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.write.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.get.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.multiget.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.seek.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.next.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.prev.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.blob.file.write.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.blob.file.read.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.blob.file.sync.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.gc.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.compression.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.blobdb.decompression.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.db.flush.micros P50 : 462211.538462 P95 : 524076.000000 P99 : 524076.000000 P100 : 524076.000000 COUNT : 177 SUM : 78115142
rocksdb.sst.batch.size P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.num.index.and.filter.blocks.read.per.level P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.num.data.blocks.read.per.level P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.num.sst.read.per.level P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.error.handler.autoresume.retry.count P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0


tsgmq commented Jun 1, 2022

-- <test-zone00/test04.zone00.test.net:7788::1> 6/1/22, 9:24:54 AM =============================================================

-- <test-zone00/test04.zone00.test.net:7788::1> -- Gauges ----------------------------------------------------------------------
jraft-fsm-caller-disruptor.buffer-size
value = 16384
jraft-fsm-caller-disruptor.remaining-capacity
value = 16384
jraft-leader-listener
value = true
jraft-log-manager-disruptor.buffer-size
value = 16384
jraft-log-manager-disruptor.remaining-capacity
value = 16384
jraft-node-impl-disruptor.buffer-size
value = 16384
jraft-node-impl-disruptor.remaining-capacity
value = 16384
jraft-node-info-applyIndex
value = 463492469
jraft-node-info-commitIndex
value = 463492469
jraft-node-info-currTerm
value = 96
jraft-node-info-logIndex
value = 463492469
jraft-node-info-snapshotIndex
value = 463492461
jraft-node-info-snapshotLogSize
value = 1947985755
jraft-node-info-snapshotSize
value = 8081913
jraft-node-info-state
value = 0
jraft-read-only-service-disruptor.buffer-size
value = 16384
jraft-read-only-service-disruptor.remaining-capacity
value = 16384
raft-rpc-client-thread-pool.active
value = 0
raft-rpc-client-thread-pool.completed
value = 4
raft-rpc-client-thread-pool.pool-size
value = 4
raft-rpc-client-thread-pool.queued
value = 0
raft-utils-closure-thread-pool.active
value = 0
raft-utils-closure-thread-pool.completed
value = 7348
raft-utils-closure-thread-pool.pool-size
value = 16
raft-utils-closure-thread-pool.queued
value = 0
replicator-test-zone00/test00.zone00.sga.testnet.net:7788.append-entries-times
value = 893
replicator-test-zone00/test00.zone00.sga.testnet.net:7788.heartbeat-times
value = 22278
replicator-test-zone00/test00.zone00.sga.testnet.net:7788.install-snapshot-times
value = 0
replicator-test-zone00/test00.zone00.sga.testnet.net:7788.log-lags
value = 0
replicator-test-zone00/test00.zone00.sga.testnet.net:7788.next-index
value = 463492470
replicator-test-zone00/test00.zone00.sga.testnet.net:7788.probe-times
value = 2
replicator-test-zone00/test01.zone00.sgb.testnet.net:7788.append-entries-times
value = 893
replicator-test-zone00/test01.zone00.sgb.testnet.net:7788.heartbeat-times
value = 22249
replicator-test-zone00/test01.zone00.sgb.testnet.net:7788.install-snapshot-times
value = 0
replicator-test-zone00/test01.zone00.sgb.testnet.net:7788.log-lags
value = 0
replicator-test-zone00/test01.zone00.sgb.testnet.net:7788.next-index
value = 463492470
replicator-test-zone00/test01.zone00.sgb.testnet.net:7788.probe-times
value = 3
replicator-test-zone00/test02.zone00.test.net:7788.append-entries-times
value = 894
replicator-test-zone00/test02.zone00.test.net:7788.heartbeat-times
value = 22300
replicator-test-zone00/test02.zone00.test.net:7788.install-snapshot-times
value = 0
replicator-test-zone00/test02.zone00.test.net:7788.log-lags
value = 0
replicator-test-zone00/test02.zone00.test.net:7788.next-index
value = 463492470
replicator-test-zone00/test02.zone00.test.net:7788.probe-times
value = 3
replicator-test-zone00/test03.zone00.sga.testnet.net:7788.append-entries-times
value = 893
replicator-test-zone00/test03.zone00.sga.testnet.net:7788.heartbeat-times
value = 22263
replicator-test-zone00/test03.zone00.sga.testnet.net:7788.install-snapshot-times
value = 0
replicator-test-zone00/test03.zone00.sga.testnet.net:7788.log-lags
value = 0
replicator-test-zone00/test03.zone00.sga.testnet.net:7788.next-index
value = 463492470
replicator-test-zone00/test03.zone00.sga.testnet.net:7788.probe-times
value = 3

-- <test-zone00/test04.zone00.test.net:7788::1> -- Histograms ------------------------------------------------------------------
append-logs-bytes
count = 5734
min = 0
max = 4112620
mean = 2022995.37
stddev = 392975.43
median = 1644453.00
75% <= 2430963.00
95% <= 2430963.00
98% <= 2430963.00
99% <= 2430963.00
99.9% <= 2430963.00
append-logs-count
count = 5734
min = 1
max = 2
mean = 1.00
stddev = 0.00
median = 1.00
75% <= 1.00
95% <= 1.00
98% <= 1.00
99% <= 1.00
99.9% <= 1.00
fsm-apply-tasks-count
count = 5547
min = 1
max = 2
mean = 1.00
stddev = 0.05
median = 1.00
75% <= 1.00
95% <= 1.00
98% <= 1.00
99% <= 1.00
99.9% <= 2.00
handle-append-entries-count
count = 4827
min = 1
max = 2
mean = 1.00
stddev = 0.00
median = 1.00
75% <= 1.00
95% <= 1.00
98% <= 1.00
99% <= 1.00
99.9% <= 1.00
replicate-entries-bytes
count = 3573
min = 1643994
max = 4795391
mean = 2028222.00
stddev = 406060.79
median = 1644453.00
75% <= 2430963.00
95% <= 2430963.00
98% <= 2430963.00
99% <= 2430963.00
99.9% <= 4075416.00
replicate-entries-count
count = 3573
min = 1
max = 2
mean = 1.00
stddev = 0.05
median = 1.00
75% <= 1.00
95% <= 1.00
98% <= 1.00
99% <= 1.00
99.9% <= 2.00
replicator-test-zone00/test00.zone00.sga.testnet.net:7788.replicate-inflights-count
count = 895
min = 1
max = 2
mean = 1.00
stddev = 0.01
median = 1.00
75% <= 1.00
95% <= 1.00
98% <= 1.00
99% <= 1.00
99.9% <= 1.00
replicator-test-zone00/test01.zone00.sgb.testnet.net:7788.replicate-inflights-count
count = 896
min = 1
max = 2
mean = 1.00
stddev = 0.01
median = 1.00
75% <= 1.00
95% <= 1.00
98% <= 1.00
99% <= 1.00
99.9% <= 1.00
replicator-test-zone00/test02.zone00.test.net:7788.replicate-inflights-count
count = 897
min = 1
max = 2
mean = 1.00
stddev = 0.01
median = 1.00
75% <= 1.00
95% <= 1.00
98% <= 1.00
99% <= 1.00
99.9% <= 1.00
replicator-test-zone00/test03.zone00.sga.testnet.net:7788.replicate-inflights-count
count = 896
min = 1
max = 2
mean = 1.00
stddev = 0.01
median = 1.00
75% <= 1.00
95% <= 1.00
98% <= 1.00
99% <= 1.00
99.9% <= 1.00

-- <test-zone00/test04.zone00.test.net:7788::1> -- Timers ----------------------------------------------------------------------
append-logs
count = 5734
mean rate = 0.05 calls/second
1-minute rate = 0.11 calls/second
5-minute rate = 0.10 calls/second
15-minute rate = 0.12 calls/second
min = 0.00 milliseconds
max = 10.00 milliseconds
mean = 4.82 milliseconds
stddev = 1.03 milliseconds
median = 5.00 milliseconds
75% <= 6.00 milliseconds
95% <= 6.00 milliseconds
98% <= 7.00 milliseconds
99% <= 7.00 milliseconds
99.9% <= 7.00 milliseconds
fsm-apply-tasks
count = 5547
mean rate = 0.05 calls/second
1-minute rate = 0.10 calls/second
5-minute rate = 0.10 calls/second
15-minute rate = 0.11 calls/second
min = 0.00 milliseconds
max = 29.00 milliseconds
mean = 6.77 milliseconds
stddev = 1.32 milliseconds
median = 6.00 milliseconds
75% <= 8.00 milliseconds
95% <= 8.00 milliseconds
98% <= 8.00 milliseconds
99% <= 9.00 milliseconds
99.9% <= 13.00 milliseconds
fsm-commit
count = 5549
mean rate = 0.05 calls/second
1-minute rate = 0.10 calls/second
5-minute rate = 0.10 calls/second
15-minute rate = 0.11 calls/second
min = 0.00 milliseconds
max = 29.00 milliseconds
mean = 6.77 milliseconds
stddev = 1.34 milliseconds
median = 6.00 milliseconds
75% <= 8.00 milliseconds
95% <= 8.00 milliseconds
98% <= 8.00 milliseconds
99% <= 9.00 milliseconds
99.9% <= 14.00 milliseconds
fsm-snapshot-load
count = 1
mean rate = 0.00 calls/second
1-minute rate = 0.00 calls/second
5-minute rate = 0.00 calls/second
15-minute rate = 0.00 calls/second
min = 0.00 milliseconds
max = 0.00 milliseconds
mean = 0.00 milliseconds
stddev = 0.00 milliseconds
median = 0.00 milliseconds
75% <= 0.00 milliseconds
95% <= 0.00 milliseconds
98% <= 0.00 milliseconds
99% <= 0.00 milliseconds
99.9% <= 0.00 milliseconds
fsm-snapshot-save
count = 129
mean rate = 0.00 calls/second
1-minute rate = 0.00 calls/second
5-minute rate = 0.00 calls/second
15-minute rate = 0.00 calls/second
min = 0.00 milliseconds
max = 1.00 milliseconds
mean = 0.00 milliseconds
stddev = 0.00 milliseconds
median = 0.00 milliseconds
75% <= 0.00 milliseconds
95% <= 0.00 milliseconds
98% <= 0.00 milliseconds
99% <= 0.00 milliseconds
99.9% <= 0.00 milliseconds
fsm-start-following
count = 2
mean rate = 0.00 calls/second
1-minute rate = 0.00 calls/second
5-minute rate = 0.00 calls/second
15-minute rate = 0.00 calls/second
min = 0.00 milliseconds
max = 0.00 milliseconds
mean = 0.00 milliseconds
stddev = 0.00 milliseconds
median = 0.00 milliseconds
75% <= 0.00 milliseconds
95% <= 0.00 milliseconds
98% <= 0.00 milliseconds
99% <= 0.00 milliseconds
99.9% <= 0.00 milliseconds
fsm-stop-following
count = 2
mean rate = 0.00 calls/second
1-minute rate = 0.00 calls/second
5-minute rate = 0.00 calls/second
15-minute rate = 0.00 calls/second
min = 0.00 milliseconds
max = 0.00 milliseconds
mean = 0.00 milliseconds
stddev = 0.00 milliseconds
median = 0.00 milliseconds
75% <= 0.00 milliseconds
95% <= 0.00 milliseconds
98% <= 0.00 milliseconds
99% <= 0.00 milliseconds
99.9% <= 0.00 milliseconds
handle-append-entries
count = 4837
mean rate = 0.05 calls/second
1-minute rate = 0.00 calls/second
5-minute rate = 0.00 calls/second
15-minute rate = 0.00 calls/second
min = 0.00 milliseconds
max = 4.00 milliseconds
mean = 1.87 milliseconds
stddev = 0.49 milliseconds
median = 2.00 milliseconds
75% <= 2.00 milliseconds
95% <= 3.00 milliseconds
98% <= 3.00 milliseconds
99% <= 3.00 milliseconds
99.9% <= 3.00 milliseconds
handle-heartbeat-requests
count = 333430
mean rate = 3.11 calls/second
1-minute rate = 0.00 calls/second
5-minute rate = 0.00 calls/second
15-minute rate = 0.00 calls/second
min = 0.00 milliseconds
max = 2.00 milliseconds
mean = 1.16 milliseconds
stddev = 0.45 milliseconds
median = 1.00 milliseconds
75% <= 1.00 milliseconds
95% <= 2.00 milliseconds
98% <= 2.00 milliseconds
99% <= 2.00 milliseconds
99.9% <= 2.00 milliseconds
replicate-entries
count = 3573
mean rate = 0.53 calls/second
1-minute rate = 0.37 calls/second
5-minute rate = 0.39 calls/second
15-minute rate = 0.46 calls/second
min = 7.00 milliseconds
max = 35.00 milliseconds
mean = 13.38 milliseconds
stddev = 2.92 milliseconds
median = 13.00 milliseconds
75% <= 16.00 milliseconds
95% <= 17.00 milliseconds
98% <= 18.00 milliseconds
99% <= 18.00 milliseconds
99.9% <= 26.00 milliseconds
request-vote
count = 4
mean rate = 0.00 calls/second
1-minute rate = 0.00 calls/second
5-minute rate = 0.00 calls/second
15-minute rate = 0.00 calls/second
min = 1.00 milliseconds
max = 11.00 milliseconds
mean = 4.25 milliseconds
stddev = 3.96 milliseconds
median = 3.00 milliseconds
75% <= 11.00 milliseconds
95% <= 11.00 milliseconds
98% <= 11.00 milliseconds
99% <= 11.00 milliseconds
99.9% <= 11.00 milliseconds
save-raft-meta
count = 3
mean rate = 0.00 calls/second
1-minute rate = 0.00 calls/second
5-minute rate = 0.00 calls/second
15-minute rate = 0.00 calls/second
min = 0.00 milliseconds
max = 0.00 milliseconds
mean = 0.00 milliseconds
stddev = 0.00 milliseconds
median = 0.00 milliseconds
75% <= 0.00 milliseconds
95% <= 0.00 milliseconds
98% <= 0.00 milliseconds
99% <= 0.00 milliseconds
99.9% <= 0.00 milliseconds
truncate-log-prefix
count = 179
mean rate = 0.00 calls/second
1-minute rate = 0.00 calls/second
5-minute rate = 0.00 calls/second
15-minute rate = 0.00 calls/second
min = 0.00 milliseconds
max = 1.00 milliseconds
mean = 0.00 milliseconds
stddev = 0.00 milliseconds
median = 0.00 milliseconds
75% <= 0.00 milliseconds
95% <= 0.00 milliseconds
98% <= 0.00 milliseconds
99% <= 0.00 milliseconds
99.9% <= 0.00 milliseconds
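The `RejectedExecutionException` in the report shows a pool in the state `pool size = 100, active threads = 100, queued tasks = 0` with `AbortPolicy` — every worker busy and no queue slack, so the next submit is rejected outright. A minimal stdlib-only sketch (not JRaft code; the class name is illustrative) that deterministically reproduces this state with a pool of 2:

```java
import java.util.concurrent.*;

// Minimal sketch (not JRaft code) reproducing the rejection in the stack
// trace: a fixed-size pool with a SynchronousQueue ("queued tasks = 0")
// and AbortPolicy rejects the first submit that finds no free worker.
public class RejectionDemo {
    static boolean submitUntilRejected() throws Exception {
        // pool size 2 stands in for "pool size = 100, active threads = 100"
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<>(),              // queued tasks = 0
                new ThreadPoolExecutor.AbortPolicy()); // same policy as the trace
        CountDownLatch block = new CountDownLatch(1);
        boolean rejected = false;
        try {
            // occupy both workers with tasks that only finish when released,
            // like waiters accumulating in LogManagerImpl#waitMap
            for (int i = 0; i < 2; i++) {
                pool.submit(() -> {
                    try { block.await(); } catch (InterruptedException ignored) { }
                });
            }
            pool.submit(() -> { }); // third task: no free worker, no queue slot
        } catch (RejectedExecutionException e) {
            rejected = true;        // the error seen in the report
        } finally {
            block.countDown();
            pool.shutdownNow();
        }
        return rejected;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("rejected=" + submitUntilRejected());
    }
}
```

The rejection is deterministic here: both core workers are created and blocked before the third submit, the `SynchronousQueue` offer fails because no worker is waiting to take, and the pool is already at its maximum size.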


tsgmq commented Jun 1, 2022

6/1/22, 9:24:54 AM =============================================================

-- Timers ----------------------------------------------------------------------
scheduledThreadPool.JRaft-Node-ScheduleThreadPool
count = 89097
mean rate = 13.20 calls/second
1-minute rate = 13.20 calls/second
5-minute rate = 13.20 calls/second
15-minute rate = 13.20 calls/second
min = 0.01 milliseconds
max = 0.04 milliseconds
mean = 0.01 milliseconds
stddev = 0.00 milliseconds
median = 0.01 milliseconds
75% <= 0.01 milliseconds
95% <= 0.01 milliseconds
98% <= 0.02 milliseconds
99% <= 0.02 milliseconds
99.9% <= 0.03 milliseconds
threadPool.JRAFT_CLOSURE_EXECUTOR
count = 7348
mean rate = 0.07 calls/second
1-minute rate = 0.86 calls/second
5-minute rate = 0.90 calls/second
15-minute rate = 1.03 calls/second
min = 0.08 milliseconds
max = 5.98 milliseconds
mean = 2.54 milliseconds
stddev = 0.68 milliseconds
median = 2.47 milliseconds
75% <= 2.53 milliseconds
95% <= 4.02 milliseconds
98% <= 4.14 milliseconds
99% <= 4.14 milliseconds
99.9% <= 5.94 milliseconds
threadPool.JRAFT_RPC_CLOSURE_EXECUTOR
count = 89097
mean rate = 13.20 calls/second
1-minute rate = 13.20 calls/second
5-minute rate = 13.20 calls/second
15-minute rate = 13.20 calls/second
min = 0.02 milliseconds
max = 2.18 milliseconds
mean = 1.09 milliseconds
stddev = 0.22 milliseconds
median = 1.05 milliseconds
75% <= 1.09 milliseconds
95% <= 1.54 milliseconds
98% <= 1.57 milliseconds
99% <= 1.58 milliseconds
99.9% <= 1.63 milliseconds
threadPool.JRaft-RPC-Processor
count = 4
mean rate = 0.00 calls/second
1-minute rate = 0.00 calls/second
5-minute rate = 0.00 calls/second
15-minute rate = 0.00 calls/second
min = 16.50 milliseconds
max = 55.62 milliseconds
mean = 40.00 milliseconds
stddev = 14.61 milliseconds
median = 47.59 milliseconds
75% <= 55.62 milliseconds
95% <= 55.62 milliseconds
98% <= 55.62 milliseconds
99% <= 55.62 milliseconds
99.9% <= 55.62 milliseconds
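The dumps above are periodic console-reporter snapshots of the JRaft thread pools. As a stdlib-only sketch (class and method names are illustrative, not JRaft APIs), the same kind of sampling can be done directly against a `ThreadPoolExecutor`, which is one way to watch a slowly filling pool — the `waitMap` symptom described in this issue — before submits start getting rejected:

```java
import java.util.concurrent.*;

// Illustrative stdlib-only sketch: periodically sample a ThreadPoolExecutor
// in the same spirit as the metric dumps above, so gradual pool saturation
// is visible before RejectedExecutionException appears.
public class PoolSampler {
    // one formatted sample line, mirroring the fields in the error message
    static String sample(ThreadPoolExecutor pool) {
        return String.format(
                "pool size = %d, active threads = %d, queued tasks = %d, completed tasks = %d",
                pool.getPoolSize(), pool.getActiveCount(),
                pool.getQueue().size(), pool.getCompletedTaskCount());
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        ScheduledExecutorService reporter = Executors.newSingleThreadScheduledExecutor();
        // print a sample every 5 seconds, as a console reporter would
        reporter.scheduleAtFixedRate(() -> System.out.println(sample(pool)),
                0, 5, TimeUnit.SECONDS);
        // ... application work ...
        pool.shutdown();
        reporter.shutdownNow();
    }
}
```

A steadily rising `active threads` count against a flat `completed tasks` rate in such samples would point at tasks blocking rather than finishing, which matches the growth of `waitMap` described above.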

killme2008 added a commit that referenced this issue Jun 18, 2022
fengjiachun pushed a commit that referenced this issue Jun 20, 2022
* fix: reset waitId unexpectedly when replicator blocks on network issue, and improve logging in replicator #842, #838

* test: fix testMetricRemoveOnDestroy