
infoschema, executor, util/kvcache, util/stmtsummary: Add STATEMENTS_SUMMARY_EVICTED into information_schema #24513

Merged on May 28, 2021 (69 commits)
Commits
69 commits
1a96cbb
util/kvcache: enhance LRU Cache
ClSlaid Apr 23, 2021
0c8920a
Merge branch 'master' into master
ClSlaid Apr 23, 2021
25ae806
Merge branch 'master' into master
ClSlaid Apr 23, 2021
529e346
Util/kvcache: Typo fix
ClSlaid Apr 23, 2021
38fa019
util/kvcache/simple_lru.go: typo fix
ClSlaid Apr 25, 2021
d78ca46
util/kvcache: fix test
ClSlaid Apr 25, 2021
1cddd8f
Merge branch 'master' into master
crazycs520 Apr 25, 2021
33d865f
util/kvcache: synced onEvict func -> asynced
ClSlaid Apr 25, 2021
2d75e37
Merge branch 'master' of github.com:ClSlaid/tidb
ClSlaid Apr 25, 2021
05703d8
util/kvcache: formatted
ClSlaid Apr 25, 2021
eaf1caa
Merge branch 'master' into master
crazycs520 Apr 25, 2021
0163613
Merge branch 'master' into master
ti-chi-bot Apr 25, 2021
0935f75
Merge branch 'master' into master
ti-chi-bot Apr 25, 2021
c0227b6
Statements Summary Evicted Prototype Commit
ClSlaid Apr 30, 2021
0a168fe
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid Apr 30, 2021
b447b86
Fix STATEMENTS_SUMMARY_EVICTED
ClSlaid May 8, 2021
8e8792b
util/stmtsummary: Fix evicted.go
ClSlaid May 10, 2021
2a8ed11
Merge branch 'pingcap:master' into master
ClSlaid May 10, 2021
12b6a87
Merge branch 'master' into master
ClSlaid May 10, 2021
373d344
util/stmtsummary: Delete debug code
ClSlaid May 10, 2021
0f24624
util/stmtsummary: add test to EVICTED_COUNT
ClSlaid May 11, 2021
e2a18b1
util/stmtsummary: Add test to evicted count
ClSlaid May 12, 2021
6b7da7e
go.sum: disable fail-point
ClSlaid May 12, 2021
ebad974
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 12, 2021
3fcd196
fix merge problems
ClSlaid May 12, 2021
073ff06
clean up evicted count
ClSlaid May 14, 2021
32e8df9
typo fix
ClSlaid May 14, 2021
1092e72
performance improve and typo fix
ClSlaid May 14, 2021
7e8a8d8
simplify logic in addEvicted
ClSlaid May 17, 2021
252b43a
beautify evicted.go && typo fix
ClSlaid May 17, 2021
015b145
fix nil pointer bug in evicted.go
ClSlaid May 19, 2021
e061027
fix zero quota test for kvcache
ClSlaid May 20, 2021
d3bf818
Add test to evicted.go and some bug fixes
ClSlaid May 20, 2021
f59155b
typo fix in executor and infoschema
ClSlaid May 20, 2021
efcad35
typo fix
ClSlaid May 21, 2021
80167a3
try fix git merge problem
ClSlaid May 21, 2021
dbc94c7
Add more test to evicted count
ClSlaid May 21, 2021
756c53e
evicted test full cover
ClSlaid May 24, 2021
43a4adc
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 24, 2021
be4b4e4
fix merge conflict
ClSlaid May 24, 2021
c6ce248
fix nil pointer problem
ClSlaid May 24, 2021
a8be817
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 24, 2021
fdec81a
test refactoring
ClSlaid May 24, 2021
01a92a4
format and add license.
ClSlaid May 24, 2021
9c503e5
fix empty table error and clean up useless codes.
ClSlaid May 25, 2021
b2278b2
Add test to table and more test to evicted count
ClSlaid May 25, 2021
d3ea973
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 25, 2021
a3ed839
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 26, 2021
9b243a5
typo fix
ClSlaid May 26, 2021
eaffa1c
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 26, 2021
413ed0d
make check
ClSlaid May 26, 2021
c91e5e3
OUTDATED AGAIN???: Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 26, 2021
0bd1f5e
fix go.sum
ClSlaid May 26, 2021
f282833
try fix data racing
ClSlaid May 26, 2021
16c83f9
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 26, 2021
cca919b
try fix data racing.
ClSlaid May 26, 2021
52cb27b
try fix data racing again.
ClSlaid May 26, 2021
a31059d
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 26, 2021
4c9e65f
Merge branch 'master' into master
crazycs520 May 27, 2021
b7a87e1
try fix data racing again again.
ClSlaid May 27, 2021
5725827
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 27, 2021
b0776f5
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 27, 2021
b50b213
revoke change in unrelated files.
ClSlaid May 27, 2021
3b9fa88
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 27, 2021
cafa16a
Merge branch 'master' into master
crazycs520 May 28, 2021
f880324
change interval in tables_test.go
ClSlaid May 28, 2021
f740b34
Merge branch 'master' of github.com:pingcap/tidb
ClSlaid May 28, 2021
6c69920
Merge branch 'master' into master
ti-chi-bot May 28, 2021
caab2da
Merge branch 'master' into master
ti-chi-bot May 28, 2021
1 change: 1 addition & 0 deletions executor/builder.go
@@ -1526,6 +1526,7 @@ func (b *executorBuilder) buildMemTable(v *plannercore.PhysicalMemTable) Executo
strings.ToLower(infoschema.TableTiKVStoreStatus),
strings.ToLower(infoschema.TableStatementsSummary),
strings.ToLower(infoschema.TableStatementsSummaryHistory),
strings.ToLower(infoschema.TableStatementsSummaryEvicted),
strings.ToLower(infoschema.ClusterTableStatementsSummary),
strings.ToLower(infoschema.ClusterTableStatementsSummaryHistory),
strings.ToLower(infoschema.TablePlacementPolicy),
10 changes: 10 additions & 0 deletions executor/infoschema_reader.go
@@ -149,6 +149,8 @@ func (e *memtableRetriever) retrieve(ctx context.Context, sctx sessionctx.Contex
infoschema.TableClientErrorsSummaryByUser,
infoschema.TableClientErrorsSummaryByHost:
err = e.setDataForClientErrorsSummary(sctx, e.table.Name.O)
case infoschema.TableStatementsSummaryEvicted:
err = e.setDataForStmtSummaryEvicted(sctx, e.table.Name.O)
}
if err != nil {
return nil, err
@@ -2011,6 +2013,14 @@ func (e *memtableRetriever) setDataForClientErrorsSummary(ctx sessionctx.Context
return nil
}

func (e *memtableRetriever) setDataForStmtSummaryEvicted(ctx sessionctx.Context, tableName string) error {
Contributor:
Looks like this function is useless? Just use e.rows = stmtsummary.StmtSummaryByDigestMap.ToEvictedCountDatum() at line #153?

Contributor (author):
Good idea, I wasn't aware of it.

switch tableName {
case infoschema.TableStatementsSummaryEvicted:
e.rows = stmtsummary.StmtSummaryByDigestMap.ToEvictedCountDatum()
}
return nil
jyz0309 marked this conversation as resolved.
}

type hugeMemTableRetriever struct {
dummyCloser
table *model.TableInfo
10 changes: 10 additions & 0 deletions infoschema/tables.go
@@ -161,6 +161,8 @@ const (
TableClientErrorsSummaryByUser = "CLIENT_ERRORS_SUMMARY_BY_USER"
// TableClientErrorsSummaryByHost is the string constant of client errors table.
TableClientErrorsSummaryByHost = "CLIENT_ERRORS_SUMMARY_BY_HOST"
// TableStatementsSummaryEvicted is the string constant of statements summary evicted table.
TableStatementsSummaryEvicted = "STATEMENTS_SUMMARY_EVICTED"
jyz0309 marked this conversation as resolved.
)

var tableIDMap = map[string]int64{
@@ -233,6 +235,7 @@ var tableIDMap = map[string]int64{
TableClientErrorsSummaryGlobal: autoid.InformationSchemaDBID + 67,
TableClientErrorsSummaryByUser: autoid.InformationSchemaDBID + 68,
TableClientErrorsSummaryByHost: autoid.InformationSchemaDBID + 69,
TableStatementsSummaryEvicted: autoid.InformationSchemaDBID + 70,
}

type columnInfo struct {
@@ -1332,6 +1335,12 @@ var tableClientErrorsSummaryByHostCols = []columnInfo{
{name: "LAST_SEEN", tp: mysql.TypeTimestamp, size: 26},
}

var tableStatementsSummaryEvictedCols = []columnInfo{
{name: "BEGIN_TIME", tp: mysql.TypeTimestamp, size: 26},
{name: "END_TIME", tp: mysql.TypeTimestamp, size: 26},
{name: "EVICTED_COUNT", tp: mysql.TypeLonglong, size: 64, flag: mysql.NotNullFlag},
}

// GetShardingInfo returns a nil or description string for the sharding information of given TableInfo.
// The returned description string may be:
// - "NOT_SHARDED": for tables that SHARD_ROW_ID_BITS is not specified.
@@ -1701,6 +1710,7 @@ var tableNameToColumns = map[string][]columnInfo{
TableClientErrorsSummaryGlobal: tableClientErrorsSummaryGlobalCols,
TableClientErrorsSummaryByUser: tableClientErrorsSummaryByUserCols,
TableClientErrorsSummaryByHost: tableClientErrorsSummaryByHostCols,
TableStatementsSummaryEvicted: tableStatementsSummaryEvictedCols,
jyz0309 marked this conversation as resolved.
}

func createInfoSchemaTable(_ autoid.Allocators, meta *model.TableInfo) (table.Table, error) {
3 changes: 3 additions & 0 deletions util/kvcache/simple_lru.go
@@ -118,6 +118,9 @@ func (l *SimpleLRUCache) Put(key Key, value Value) {
if l.size > l.capacity {
lru := l.cache.Back()
l.cache.Remove(lru)
if l.onEvict != nil {
crazycs520 marked this conversation as resolved.
l.onEvict(lru.Value.(*cacheEntry).key, lru.Value.(*cacheEntry).value)
}
delete(l.elements, string(lru.Value.(*cacheEntry).key.Hash()))
l.size--
}
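The guarded onEvict call added to Put above can be sketched as a tiny standalone LRU. This is a minimal illustration under assumed names (lru, entry, newLRU are hypothetical, not the actual util/kvcache types):

```go
package main

import (
	"container/list"
	"fmt"
)

type entry struct {
	key, value string
}

// lru is a minimal bounded LRU with an optional onEvict callback,
// mirroring the shape of SimpleLRUCache.Put in the diff above.
type lru struct {
	capacity int
	elems    map[string]*list.Element
	order    *list.List // front = most recently used
	onEvict  func(key, value string)
}

func newLRU(capacity int, onEvict func(key, value string)) *lru {
	return &lru{
		capacity: capacity,
		elems:    map[string]*list.Element{},
		order:    list.New(),
		onEvict:  onEvict,
	}
}

func (l *lru) Put(key, value string) {
	if e, ok := l.elems[key]; ok {
		e.Value.(*entry).value = value
		l.order.MoveToFront(e)
		return
	}
	l.elems[key] = l.order.PushFront(&entry{key, value})
	if l.order.Len() > l.capacity {
		back := l.order.Back()
		l.order.Remove(back)
		ev := back.Value.(*entry)
		if l.onEvict != nil {
			// fire the hook before the entry is forgotten, as in the diff
			l.onEvict(ev.key, ev.value)
		}
		delete(l.elems, ev.key)
	}
}

// evictedKeysAfter fills a capacity-2 cache with three keys and
// returns the keys the onEvict hook observed.
func evictedKeysAfter() []string {
	var evicted []string
	c := newLRU(2, func(k, _ string) { evicted = append(evicted, k) })
	c.Put("a", "1")
	c.Put("b", "2")
	c.Put("c", "3") // "a" is least recently used and gets evicted
	return evicted
}

func main() {
	fmt.Println(evictedKeysAfter())
}
```

The nil check matters: a cache constructed without a callback must not panic on eviction, which is exactly what the added guard in simple_lru.go ensures.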
319 changes: 319 additions & 0 deletions util/stmtsummary/evicted.go
@@ -0,0 +1,319 @@
package stmtsummary
djshow832 marked this conversation as resolved.

import (
"container/list"
"time"

"github.com/pingcap/parser/mysql"
"github.com/pingcap/tidb/types"
)

// stmtSummaryByDigestEvicted contains digests evicted from stmtSummaryByDigestMap
type stmtSummaryByDigestEvicted struct {
// record evicted data in intervals
// latest history data is Back()
history *list.List
}

// element being stored in stmtSummaryByDigestEvicted
type stmtSummaryByDigestEvictedElement struct {
// *Kinds* of digest being evicted
digestKeyMap map[string]struct{}

// summary of digest being evicted
sum *stmtSummaryByDigestElement
}

// newStmtSummaryByDigestEvicted creates a new, empty stmtSummaryByDigestEvicted
func newStmtSummaryByDigestEvicted() *stmtSummaryByDigestEvicted {
return &stmtSummaryByDigestEvicted{
history: list.New(),
}
}

// newStmtSummaryByDigestEvictedElement creates a stmtSummaryByDigestEvictedElement for the given interval
func newStmtSummaryByDigestEvictedElement(beginTimeForCurrentInterval int64, intervalSeconds int64) *stmtSummaryByDigestEvictedElement {
Contributor:
Why use intervalSeconds as a parameter? How about simply using beginTime and endTime as the parameters?

ssElement := new(stmtSummaryByDigestElement)
ssElement.beginTime = beginTimeForCurrentInterval
ssElement.endTime = beginTimeForCurrentInterval + intervalSeconds
return &stmtSummaryByDigestEvictedElement{
digestKeyMap: make(map[string]struct{}),
sum: ssElement,
}
}

// AddEvicted adds an evicted record to stmtSummaryByDigestEvicted
func (ssbde *stmtSummaryByDigestEvicted) AddEvicted(evictedKey *stmtSummaryByDigestKey, evictedValue *stmtSummaryByDigest, historySize int) {
crazycs520 marked this conversation as resolved.

// TODO: optimize. The nested scan below is O(m*n) over the evicted digest's history (m) and the history list (n); it could be reduced to O(m+n).
Contributor:
Remove the blank line and refine the comment. Need to optimize for what?

Contributor (author):

Currently I simply check every element of the evicted digest's history against the history list.

Assume the evicted digest's history has length m and the history list has length n; this operation then takes O(mn) time.

This can be improved to O(m+n).

evictedValue.Lock()
defer evictedValue.Unlock()
for e := evictedValue.history.Back(); e != nil; e = e.Prev() {
eBeginTime := e.Value.(*stmtSummaryByDigestElement).beginTime
eEndTime := e.Value.(*stmtSummaryByDigestElement).endTime
Contributor:
Suggested change:
- eBeginTime := e.Value.(*stmtSummaryByDigestElement).beginTime
- eEndTime := e.Value.(*stmtSummaryByDigestElement).endTime
+ element := e.Value.(*stmtSummaryByDigestElement)
+ eBeginTime := element.beginTime
+ eEndTime := element.endTime


// prevent exceeding history size
for ssbde.history.Len() >= historySize && ssbde.history.Len() > 1 {
ssbde.history.Remove(ssbde.history.Front())
}

// look for a matching history interval
// no record in history
if ssbde.history.Len() == 0 && historySize > 0 {
beginTime := eBeginTime
intervalSeconds := eEndTime - eBeginTime
record := newStmtSummaryByDigestEvictedElement(beginTime, intervalSeconds)
record.addEvicted(evictedKey, e.Value.(*stmtSummaryByDigestElement))
Contributor:
ditto

ssbde.history.PushBack(record)
continue
}

for h := ssbde.history.Back(); h != nil; h = h.Prev() {
sBeginTime := h.Value.(*stmtSummaryByDigestEvictedElement).sum.beginTime
sEndTime := h.Value.(*stmtSummaryByDigestEvictedElement).sum.endTime

if sBeginTime <= eBeginTime &&
sEndTime >= eEndTime {
// is in this history interval
h.Value.(*stmtSummaryByDigestEvictedElement).addEvicted(evictedKey, e.Value.(*stmtSummaryByDigestElement))
Contributor:
ditto.

break
}

if sEndTime <= eBeginTime {
// digest is young, insert into new interval after this history interval
beginTime := eBeginTime
intervalSeconds := eEndTime - eBeginTime
record := newStmtSummaryByDigestEvictedElement(beginTime, intervalSeconds)
record.addEvicted(evictedKey, e.Value.(*stmtSummaryByDigestElement))
Contributor:
ditto.

ssbde.history.InsertAfter(record, h)
break
}

if sBeginTime > eEndTime {
// digestElement is old
if h != ssbde.history.Front() {
// check older history digestEvictedElement
continue
} else if ssbde.history.Len() >= historySize {
// out of history size, abandon
break
} else {
// is oldest digest
// create a digestEvictedElement and PushFront
beginTime := eBeginTime
intervalSeconds := eEndTime - eBeginTime
record := newStmtSummaryByDigestEvictedElement(beginTime, intervalSeconds)
record.addEvicted(evictedKey, e.Value.(*stmtSummaryByDigestElement))
Contributor:
ditto.

ssbde.history.PushFront(record)
break
}
}
Contributor:
Does any test cover this? Because I think this situation should never happen.

Contributor (author):

Currently no.

}
}
}
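The history walk in AddEvicted above handles three cases per evicted element: it lands inside an existing window, it is newer than a window and gets inserted after it, or it is older than everything and is pushed to the front. A simplified illustration against a single sorted slice (hypothetical names, a slice instead of container/list, no history-size cap):

```go
package main

import "fmt"

// interval is a [begin, end) time window, a stand-in for the
// begin/end times carried by stmtSummaryByDigestElement.
type interval struct{ begin, end int64 }

// placeEvicted merges the evicted interval e into history (kept
// sorted by begin time) and returns the updated history plus the
// index the record landed at.
func placeEvicted(history []interval, e interval) ([]interval, int) {
	for i := len(history) - 1; i >= 0; i-- {
		h := history[i]
		if h.begin <= e.begin && h.end >= e.end {
			// falls inside this window: record it there
			return history, i
		}
		if h.end <= e.begin {
			// e is newer than this window: insert right after it
			out := append([]interval{}, history[:i+1]...)
			out = append(out, e)
			out = append(out, history[i+1:]...)
			return out, i + 1
		}
	}
	// older than every window: push front
	return append([]interval{e}, history...), 0
}

func main() {
	h := []interval{{0, 10}, {10, 20}}
	h, i := placeEvicted(h, interval{12, 14})
	fmt.Println(i, h) // lands inside the {10, 20} window
}
```

The real implementation also caps the list at historySize and walks the evicted digest's whole history, but the interval placement logic is the same shape.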

// Clear removes all records in stmtSummaryByDigestEvicted
func (ssbde *stmtSummaryByDigestEvicted) Clear() {
ssbde.history.Init()
}

// add an evicted record to stmtSummaryByDigestEvictedElement
func (seElement *stmtSummaryByDigestEvictedElement) addEvicted(digestKey *stmtSummaryByDigestKey, digestValue *stmtSummaryByDigestElement) {
if digestKey != nil {
seElement.digestKeyMap[string(digestKey.Hash())] = struct{}{}
}
sumEvicted(seElement.sum, digestValue)
}

// ToCurrentOtherDatum converts current evicted record to `other` record's datum
func (ssbde *stmtSummaryByDigestEvicted) ToCurrentOtherDatum() []types.Datum {
induceSsbd := new(stmtSummaryByDigest)
induceSsbd.stmtType = ""
induceSsbd.schemaName = ""
induceSsbd.digest = ""
induceSsbd.normalizedSQL = ""
induceSsbd.planDigest = ""
return ssbde.history.Back().Value.(*stmtSummaryByDigestEvictedElement).toOtherDatum(induceSsbd)
}

// ToHistoryOtherDatum converts history evicted record to `other` record's datum
func (ssbde *stmtSummaryByDigestEvicted) ToHistoryOtherDatum() [][]types.Datum {
induceSsbd := new(stmtSummaryByDigest)

var records [][]types.Datum
for e := ssbde.history.Front(); e != nil; e = e.Next() {
if record := e.Value.(*stmtSummaryByDigestEvictedElement).toOtherDatum(induceSsbd); record != nil {
records = append(records, record)
}
}
return records
}

// ToEvictedCountDatum converts history evicted record to `evicted count` record's datum
func (ssbde *stmtSummaryByDigestEvicted) ToEvictedCountDatum() [][]types.Datum {
records := make([][]types.Datum, 0, ssbde.history.Len())
for e := ssbde.history.Front(); e != nil; e = e.Next() {
if record := e.Value.(*stmtSummaryByDigestEvictedElement).toEvictedCountDatum(); record != nil {
records = append(records, record)
}
}
return records
}

// toOtherDatum converts evicted record to `other` record's datum
func (seElement *stmtSummaryByDigestEvictedElement) toOtherDatum(ssbd *stmtSummaryByDigest) []types.Datum {
return seElement.sum.toDatum(ssbd)
}

// toEvictedCountDatum converts evicted record to `EvictedCount` record's datum
func (seElement *stmtSummaryByDigestEvictedElement) toEvictedCountDatum() []types.Datum {
datum := types.MakeDatums(
types.NewTime(types.FromGoTime(time.Unix(seElement.sum.beginTime, 0)), mysql.TypeTimestamp, 0),
types.NewTime(types.FromGoTime(time.Unix(seElement.sum.endTime, 0)), mysql.TypeTimestamp, 0),
int64(len(seElement.digestKeyMap)),
)
return datum
}
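toEvictedCountDatum reports len(digestKeyMap), so EVICTED_COUNT counts distinct digests per window, not eviction events. A small sketch of why a set backs the count (names here are illustrative, not the TiDB types):

```go
package main

import "fmt"

// evictedWindow tracks which digests were evicted in one time
// window; backing it with a set means the same digest evicted
// repeatedly in a window counts once.
type evictedWindow struct {
	beginTime, endTime int64
	digests            map[string]struct{}
}

func (w *evictedWindow) addEvicted(digest string) {
	w.digests[digest] = struct{}{}
}

func (w *evictedWindow) evictedCount() int { return len(w.digests) }

func main() {
	w := &evictedWindow{beginTime: 0, endTime: 1800, digests: map[string]struct{}{}}
	w.addEvicted("d1")
	w.addEvicted("d1") // same digest again: still one kind
	w.addEvicted("d2")
	fmt.Println(w.evictedCount())
}
```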

func (ssMap *stmtSummaryByDigestMap) ToEvictedCountDatum() [][]types.Datum {
return ssMap.other.ToEvictedCountDatum()
}

// sumEvicted sums addWith into sumTo
func sumEvicted(sumTo *stmtSummaryByDigestElement, addWith *stmtSummaryByDigestElement) {
// Time duration relation: addWith ⊆ sumTo
if sumTo.beginTime < addWith.beginTime {
sumTo.beginTime = addWith.beginTime
}
	if sumTo.endTime > addWith.endTime {
sumTo.endTime = addWith.endTime
}
// basic
sumTo.execCount += addWith.execCount
sumTo.sumErrors += addWith.sumErrors
sumTo.sumWarnings += addWith.sumWarnings
// latency
sumTo.sumLatency += addWith.sumLatency
sumTo.sumParseLatency += addWith.sumParseLatency
sumTo.sumCompileLatency += addWith.sumCompileLatency
if sumTo.maxLatency < addWith.maxLatency {
sumTo.maxLatency = addWith.maxLatency
}
if sumTo.maxCompileLatency < addWith.maxCompileLatency {
sumTo.maxCompileLatency = addWith.maxCompileLatency
}
if sumTo.minLatency > addWith.minLatency {
sumTo.minLatency = addWith.minLatency
}
// coprocessor
sumTo.sumNumCopTasks += addWith.sumNumCopTasks
if sumTo.maxCopProcessTime < addWith.maxCopProcessTime {
sumTo.maxCopProcessTime = addWith.maxCopProcessTime
}
if sumTo.maxCopWaitTime < addWith.maxCopWaitTime {
sumTo.maxCopWaitTime = addWith.maxCopWaitTime
}
// TiKV
sumTo.sumProcessTime += addWith.sumProcessTime
sumTo.sumWaitTime += addWith.sumWaitTime
sumTo.sumBackoffTime += addWith.sumBackoffTime
sumTo.sumTotalKeys += addWith.sumTotalKeys
sumTo.sumProcessedKeys += addWith.sumProcessedKeys
sumTo.sumRocksdbDeleteSkippedCount += addWith.sumRocksdbDeleteSkippedCount
sumTo.sumRocksdbKeySkippedCount += addWith.sumRocksdbKeySkippedCount
sumTo.sumRocksdbBlockCacheHitCount += addWith.sumRocksdbBlockCacheHitCount
sumTo.sumRocksdbBlockReadCount += addWith.sumRocksdbBlockReadCount
sumTo.sumRocksdbBlockReadByte += addWith.sumRocksdbBlockReadByte
if sumTo.maxProcessTime < addWith.maxProcessTime {
sumTo.maxProcessTime = addWith.maxProcessTime
}
if sumTo.maxWaitTime < addWith.maxWaitTime {
sumTo.maxWaitTime = addWith.maxWaitTime
}
if sumTo.maxBackoffTime < addWith.maxBackoffTime {
sumTo.maxBackoffTime = addWith.maxBackoffTime
}
if sumTo.maxTotalKeys < addWith.maxTotalKeys {
sumTo.maxTotalKeys = addWith.maxTotalKeys
}
if sumTo.maxProcessedKeys < addWith.maxProcessedKeys {
sumTo.maxProcessedKeys = addWith.maxProcessedKeys
}
if sumTo.maxRocksdbBlockReadByte < addWith.maxRocksdbBlockReadByte {
sumTo.maxRocksdbBlockReadByte = addWith.maxRocksdbBlockReadByte
}
if sumTo.maxRocksdbBlockCacheHitCount < addWith.maxRocksdbBlockCacheHitCount {
sumTo.maxRocksdbBlockCacheHitCount = addWith.maxRocksdbBlockCacheHitCount
}
if sumTo.maxRocksdbBlockReadCount < addWith.maxRocksdbBlockReadCount {
sumTo.maxRocksdbBlockReadCount = addWith.maxRocksdbBlockReadCount
}
if sumTo.maxRocksdbDeleteSkippedCount < addWith.maxRocksdbDeleteSkippedCount {
sumTo.maxRocksdbDeleteSkippedCount = addWith.maxRocksdbDeleteSkippedCount
}
if sumTo.maxRocksdbKeySkippedCount < addWith.maxRocksdbKeySkippedCount {
sumTo.maxRocksdbKeySkippedCount = addWith.maxRocksdbKeySkippedCount
}
// txn
sumTo.commitCount += addWith.commitCount
sumTo.sumGetCommitTsTime += addWith.sumGetCommitTsTime
sumTo.sumPrewriteTime += addWith.sumPrewriteTime
sumTo.sumCommitTime += addWith.sumCommitTime
sumTo.sumLocalLatchTime += addWith.sumLocalLatchTime
sumTo.sumCommitBackoffTime += addWith.sumCommitBackoffTime
sumTo.sumResolveLockTime += addWith.sumResolveLockTime
sumTo.sumWriteKeys += addWith.sumWriteKeys
sumTo.sumWriteSize += addWith.sumWriteSize
sumTo.sumPrewriteRegionNum += addWith.sumPrewriteRegionNum
sumTo.sumTxnRetry += addWith.sumTxnRetry
	sumTo.sumBackoffTimes += addWith.sumBackoffTimes
if sumTo.maxGetCommitTsTime < addWith.maxGetCommitTsTime {
sumTo.maxGetCommitTsTime = addWith.maxGetCommitTsTime
}
if sumTo.maxPrewriteTime < addWith.maxPrewriteTime {
sumTo.maxPrewriteTime = addWith.maxPrewriteTime
}
if sumTo.maxCommitTime < addWith.maxCommitTime {
sumTo.maxCommitTime = addWith.maxCommitTime
}
if sumTo.maxLocalLatchTime < addWith.maxLocalLatchTime {
sumTo.maxLocalLatchTime = addWith.maxLocalLatchTime
}
if sumTo.maxCommitBackoffTime < addWith.maxCommitBackoffTime {
sumTo.maxCommitBackoffTime = addWith.maxCommitBackoffTime
}
if sumTo.maxResolveLockTime < addWith.maxResolveLockTime {
sumTo.maxResolveLockTime = addWith.maxResolveLockTime
}
if sumTo.maxWriteKeys < addWith.maxWriteKeys {
sumTo.maxWriteKeys = addWith.maxWriteKeys
}
if sumTo.maxWriteSize < addWith.maxWriteSize {
sumTo.maxWriteSize = addWith.maxWriteSize
}
	if sumTo.maxPrewriteRegionNum < addWith.maxPrewriteRegionNum {
sumTo.maxPrewriteRegionNum = addWith.maxPrewriteRegionNum
}
if sumTo.maxTxnRetry < addWith.maxTxnRetry {
sumTo.maxTxnRetry = addWith.maxTxnRetry
}
// other
sumTo.sumMem += addWith.sumMem
sumTo.sumDisk += addWith.sumDisk
sumTo.sumAffectedRows += addWith.sumAffectedRows
sumTo.sumKVTotal += addWith.sumKVTotal
sumTo.sumPDTotal += addWith.sumPDTotal
sumTo.sumBackoffTotal += addWith.sumBackoffTotal
sumTo.sumWriteSQLRespTotal += addWith.sumWriteSQLRespTotal
if sumTo.maxMem < addWith.maxMem {
sumTo.maxMem = addWith.maxMem
}
if sumTo.maxDisk < addWith.maxDisk {
sumTo.maxDisk = addWith.maxDisk
}
// plan cache
sumTo.planCacheHits += addWith.planCacheHits
// pessimistic execution retry information
sumTo.execRetryCount += addWith.execRetryCount
sumTo.execRetryTime += addWith.execRetryTime
}
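The long merge above applies one pattern field by field: counters and sums add, maxima keep the larger side, minima the smaller. A minimal sketch with illustrative names (not the real stmtSummaryByDigestElement fields beyond the few shown):

```go
package main

import "fmt"

// stats is a tiny stand-in for the summed statement statistics.
type stats struct {
	execCount  int64
	sumLatency int64
	maxLatency int64
	minLatency int64
}

// merge folds with into to: sums add, max takes the larger,
// min takes the smaller. Note both sides of each comparison must
// reference with, not to; comparing a field against itself (as the
// original maxPrewriteRegionNum and sumBackoffTimes lines did) is
// always false and silently drops data.
func merge(to, with stats) stats {
	to.execCount += with.execCount
	to.sumLatency += with.sumLatency
	if to.maxLatency < with.maxLatency {
		to.maxLatency = with.maxLatency
	}
	if to.minLatency > with.minLatency {
		to.minLatency = with.minLatency
	}
	return to
}

func main() {
	a := stats{execCount: 3, sumLatency: 30, maxLatency: 15, minLatency: 5}
	b := stats{execCount: 2, sumLatency: 40, maxLatency: 25, minLatency: 4}
	fmt.Println(merge(a, b))
}
```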