eventpb: add storage event types; log periodically per-store
Add the `StoreStats` event type, a per-store event emitted to the
`TELEMETRY` logging channel. This event type will be computed from the
Pebble metrics for each store.

Emit a `StoreStats` event periodically, by default, once per hour, per
store.

Touches #85589.

Release note: None.

Release justification: low risk, high benefit changes to existing
functionality.
nicktrav committed Aug 24, 2022
1 parent bdadde2 commit c74c215
Showing 13 changed files with 803 additions and 0 deletions.
1 change: 1 addition & 0 deletions Makefile
@@ -1629,6 +1629,7 @@ EVENTPB_PROTOS = \
pkg/util/log/eventpb/cluster_events.proto \
pkg/util/log/eventpb/job_events.proto \
pkg/util/log/eventpb/health_events.proto \
pkg/util/log/eventpb/storage_events.proto \
pkg/util/log/eventpb/telemetry.proto

EVENTLOG_PROTOS = pkg/util/log/logpb/event.proto $(EVENTPB_PROTOS)
83 changes: 83 additions & 0 deletions docs/generated/eventlog.md
@@ -2438,6 +2438,89 @@ are automatically converted server-side.
| `NewMethod` | The new hash method. | no |


#### Common fields

| Field | Description | Sensitive |
|--|--|--|
| `Timestamp` | The timestamp of the event. Expressed as nanoseconds since the Unix epoch. | no |
| `EventType` | The type of the event. | no |

## Storage telemetry events



Events in this category are logged to the `TELEMETRY` channel.


### `level_stats`

An event of type `level_stats` contains per-level statistics for an LSM.


| Field | Description | Sensitive |
|--|--|--|
| `Level` | level is the level ID in an LSM (e.g. level(L0) == 0, etc.) | no |
| `NumFiles` | num_files is the number of files in the level (gauge). | no |
| `SizeBytes` | size_bytes is the size of the level, in bytes (gauge). | no |
| `Score` | score is the compaction score of the level (gauge). | no |
| `BytesIn` | bytes_in is the number of bytes written to this level (counter). | no |
| `BytesIngested` | bytes_ingested is the number of bytes ingested into this level (counter). | no |
| `BytesMoved` | bytes_moved is the number of bytes moved into this level via a move-compaction (counter). | no |
| `BytesRead` | bytes_read is the number of bytes read from this level, during compactions (counter). | no |
| `BytesCompacted` | bytes_compacted is the number of bytes written to this level during compactions (counter). | no |
| `BytesFlushed` | bytes_flushed is the number of bytes flushed to this level. This value is always zero for levels other than L0 (counter). | no |
| `TablesCompacted` | tables_compacted is the count of tables compacted into this level (counter). | no |
| `TablesFlushed` | tables_flushed is the count of tables flushed into this level (counter). | no |
| `TablesIngested` | tables_ingested is the count of tables ingested into this level (counter). | no |
| `TablesMoved` | tables_moved is the count of tables moved into this level via move-compactions (counter). | no |
| `NumSublevels` | num_sublevels is the count of sublevels for the level. This value is always zero for levels other than L0 (gauge). | no |



### `store_stats`

An event of type `store_stats` contains per-store stats.

Note that because stats are scoped to the lifetime of the process, counters
(and certain gauges) will be reset across node restarts.


| Field | Description | Sensitive |
|--|--|--|
| `NodeId` | node_id is the ID of the node. | no |
| `StoreId` | store_id is the ID of the store. | no |
| `Levels` | levels is a nested message containing per-level statistics. | yes |
| `CacheSize` | cache_size is the size of the cache for the store, in bytes (gauge). | no |
| `CacheCount` | cache_count is the number of items in the cache (gauge). | no |
| `CacheHits` | cache_hits is the number of cache hits (counter). | no |
| `CacheMisses` | cache_misses is the number of cache misses (counter). | no |
| `CompactionCountDefault` | compaction_count_default is the count of default compactions (counter). | no |
| `CompactionCountDeleteOnly` | compaction_count_delete_only is the count of delete-only compactions (counter). | no |
| `CompactionCountElisionOnly` | compaction_count_elision_only is the count of elision-only compactions (counter). | no |
| `CompactionCountMove` | compaction_count_move is the count of move-compactions (counter). | no |
| `CompactionCountRead` | compaction_count_read is the count of read-compactions (counter). | no |
| `CompactionCountRewrite` | compaction_count_rewrite is the count of rewrite-compactions (counter). | no |
| `CompactionNumInProgress` | compactions_num_in_progress is the number of compactions in progress (gauge). | no |
| `CompactionMarkedFiles` | compaction_marked_files is the count of files marked for compaction (gauge). | no |
| `FlushCount` | flush_count is the number of flushes (counter). | no |
| `MemtableSize` | memtable_size is the total size allocated to all memtables and (large) batches, in bytes (gauge). | no |
| `MemtableCount` | memtable_count is the count of memtables (gauge). | no |
| `MemtableZombieCount` | memtable_zombie_count is the count of memtables no longer referenced by the current DB state, but still in use by an iterator (gauge). | no |
| `MemtableZombieSize` | memtable_zombie_size is the size, in bytes, of all zombie memtables (gauge). | no |
| `WalLiveCount` | wal_live_count is the count of live WAL files (gauge). | no |
| `WalLiveSize` | wal_live_size is the size, in bytes, of live data in WAL files. With WAL recycling, this value is less than the actual on-disk size of the WAL files (gauge). | no |
| `WalObsoleteCount` | wal_obsolete_count is the count of obsolete WAL files (gauge). | no |
| `WalObsoleteSize` | wal_obsolete_size is the size of obsolete WAL files, in bytes (gauge). | no |
| `WalPhysicalSize` | wal_physical_size is the size, in bytes, of the WAL files on disk (gauge). | no |
| `WalBytesIn` | wal_bytes_in is the number of logical bytes written to the WAL (counter). | no |
| `WalBytesWritten` | wal_bytes_written is the number of bytes written to the WAL (counter). | no |
| `TableObsoleteCount` | table_obsolete_count is the number of tables which are no longer referenced by the current DB state or any open iterators (gauge). | no |
| `TableObsoleteSize` | table_obsolete_size is the size, in bytes, of obsolete tables (gauge). | no |
| `TableZombieCount` | table_zombie_count is the number of tables no longer referenced by the current DB state, but are still in use by an open iterator (gauge). | no |
| `TableZombieSize` | table_zombie_size is the size, in bytes, of zombie tables (gauge). | no |
| `RangeKeySetsCount` | range_key_sets_count is the approximate count of internal range key sets in the store. | no |
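
For illustration, here is a hypothetical, abbreviated `store_stats` payload as it might appear on the `TELEMETRY` channel. The field names follow the table above; the values are invented, and most fields are omitted for brevity:

```json
{"Timestamp":1661360400000000000,"EventType":"store_stats","NodeId":1,"StoreId":1,"CacheSize":134217728,"CacheHits":500000,"CacheMisses":1234,"FlushCount":42,"Levels":[{"Level":0,"NumFiles":3,"SizeBytes":4194304},{"Level":6,"NumFiles":120,"SizeBytes":10737418240}]}
```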


#### Common fields

| Field | Description | Sensitive |
1 change: 1 addition & 0 deletions pkg/kv/kvserver/BUILD.bazel
@@ -186,6 +186,7 @@ go_library(
"//pkg/util/iterutil",
"//pkg/util/limit",
"//pkg/util/log",
"//pkg/util/log/logcrash",
"//pkg/util/metric",
"//pkg/util/metric/aggmetric",
"//pkg/util/mon",
20 changes: 20 additions & 0 deletions pkg/kv/kvserver/store.go
@@ -70,6 +70,7 @@ import (
"github.com/cockroachdb/cockroach/pkg/util/iterutil"
"github.com/cockroachdb/cockroach/pkg/util/limit"
"github.com/cockroachdb/cockroach/pkg/util/log"
"github.com/cockroachdb/cockroach/pkg/util/log/logcrash"
"github.com/cockroachdb/cockroach/pkg/util/metric"
"github.com/cockroachdb/cockroach/pkg/util/mon"
"github.com/cockroachdb/cockroach/pkg/util/protoutil"
@@ -125,6 +126,13 @@ var storeSchedulerConcurrency = envutil.EnvOrDefaultInt(
var logSSTInfoTicks = envutil.EnvOrDefaultInt(
"COCKROACH_LOG_SST_INFO_TICKS_INTERVAL", 60)

// By default, telemetry events are emitted once per hour, per store:
// (10s tick interval) * 6 * 60 = 3600s = 1h.
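// The cadence can be tuned via the environment variable below; e.g. a
// value of 6 would emit roughly once per minute (6 * 10s = 60s).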
var logStoreTelemetryTicks = envutil.EnvOrDefaultInt(
"COCKROACH_LOG_STORE_TELEMETRY_TICKS_INTERVAL",
6*60,
)

// bulkIOWriteLimit is defined here because it is used by BulkIOWriteLimiter.
var bulkIOWriteLimit = settings.RegisterByteSizeSetting(
settings.TenantWritable,
@@ -3343,6 +3351,18 @@ func (s *Store) ComputeMetrics(ctx context.Context, tick int) error {
// will not contain the log prefix.
log.Infof(ctx, "\n%s", m.Metrics)
}
// Periodically emit a store stats structured event to the TELEMETRY channel,
// if reporting is enabled. These events are intended to be emitted at low
// frequency. Trigger on every (N-1)-th tick to avoid spamming the telemetry
// channel if crash-looping.
if logcrash.DiagnosticsReportingEnabled.Get(&s.ClusterSettings().SV) &&
tick%logStoreTelemetryTicks == logStoreTelemetryTicks-1 {
// The stats event is populated from a subset of the Metrics.
e := m.AsStoreStatsEvent()
e.NodeId = int32(s.NodeID())
e.StoreId = int32(s.StoreID())
log.StructuredEvent(ctx, &e)
}
return nil
}
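
To make the cadence concrete, the following minimal, standalone sketch (assuming the 10s metrics tick noted in the comment earlier in this file) shows the modulus above firing on the (N-1)-th tick rather than on tick 0, so a crash-looping node does not emit an event immediately after every restart:

```go
package main

import "fmt"

func main() {
	// Hypothetical constant mirroring the commit's default:
	// 360 ticks at an assumed 10s tick interval = 1h.
	const logStoreTelemetryTicks = 6 * 60
	for tick := 0; tick < 3*logStoreTelemetryTicks; tick++ {
		if tick%logStoreTelemetryTicks == logStoreTelemetryTicks-1 {
			fmt.Printf("tick %4d: emit StoreStats event\n", tick)
		}
	}
	// Output:
	// tick  359: emit StoreStats event
	// tick  719: emit StoreStats event
	// tick 1079: emit StoreStats event
}
```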

1 change: 1 addition & 0 deletions pkg/storage/BUILD.bazel
@@ -69,6 +69,7 @@ go_library(
"//pkg/util/humanizeutil",
"//pkg/util/iterutil",
"//pkg/util/log",
"//pkg/util/log/eventpb",
"//pkg/util/mon",
"//pkg/util/protoutil",
"//pkg/util/syncutil",
60 changes: 60 additions & 0 deletions pkg/storage/engine.go
@@ -25,6 +25,7 @@ import (
"github.com/cockroachdb/cockroach/pkg/util/hlc"
"github.com/cockroachdb/cockroach/pkg/util/iterutil"
"github.com/cockroachdb/cockroach/pkg/util/log"
"github.com/cockroachdb/cockroach/pkg/util/log/eventpb"
"github.com/cockroachdb/cockroach/pkg/util/protoutil"
"github.com/cockroachdb/cockroach/pkg/util/uuid"
"github.com/cockroachdb/errors"
@@ -1036,6 +1037,65 @@ func (m *Metrics) CompactedBytes() (read, written uint64) {
return read, written
}

// AsStoreStatsEvent converts a Metrics struct into an eventpb.StoreStats event,
// suitable for logging to the telemetry channel.
func (m *Metrics) AsStoreStatsEvent() eventpb.StoreStats {
e := eventpb.StoreStats{
CacheSize: m.BlockCache.Size,
CacheCount: m.BlockCache.Count,
CacheHits: m.BlockCache.Hits,
CacheMisses: m.BlockCache.Misses,
CompactionCountDefault: m.Compact.DefaultCount,
CompactionCountDeleteOnly: m.Compact.DeleteOnlyCount,
CompactionCountElisionOnly: m.Compact.ElisionOnlyCount,
CompactionCountMove: m.Compact.MoveCount,
CompactionCountRead: m.Compact.ReadCount,
CompactionCountRewrite: m.Compact.RewriteCount,
CompactionNumInProgress: m.Compact.NumInProgress,
CompactionMarkedFiles: int64(m.Compact.MarkedFiles),
FlushCount: m.Flush.Count,
MemtableSize: m.MemTable.Size,
MemtableCount: m.MemTable.Count,
MemtableZombieCount: m.MemTable.ZombieCount,
MemtableZombieSize: m.MemTable.ZombieSize,
WalLiveCount: m.WAL.Files,
WalLiveSize: m.WAL.Size,
WalObsoleteCount: m.WAL.ObsoleteFiles,
WalObsoleteSize: m.WAL.ObsoletePhysicalSize,
WalPhysicalSize: m.WAL.PhysicalSize,
WalBytesIn: m.WAL.BytesIn,
WalBytesWritten: m.WAL.BytesWritten,
TableObsoleteCount: m.Table.ObsoleteCount,
TableObsoleteSize: m.Table.ObsoleteSize,
TableZombieCount: m.Table.ZombieCount,
TableZombieSize: m.Table.ZombieSize,
RangeKeySetsCount: m.Keys.RangeKeySetsCount,
}
for i, l := range m.Levels {
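// Levels that currently contain no files are omitted from the event.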
if l.NumFiles == 0 {
continue
}
e.Levels = append(e.Levels, eventpb.LevelStats{
Level: uint32(i),
NumFiles: l.NumFiles,
SizeBytes: l.Size,
Score: float32(l.Score),
BytesIn: l.BytesIn,
BytesIngested: l.BytesIngested,
BytesMoved: l.BytesMoved,
BytesRead: l.BytesRead,
BytesCompacted: l.BytesCompacted,
BytesFlushed: l.BytesFlushed,
TablesCompacted: l.TablesCompacted,
TablesFlushed: l.TablesFlushed,
TablesIngested: l.TablesIngested,
TablesMoved: l.TablesMoved,
NumSublevels: l.Sublevels,
})
}
return e
}

// EnvStats is a set of RocksDB env stats, including encryption status.
type EnvStats struct {
// TotalFiles is the total number of files reported by rocksdb.
1 change: 1 addition & 0 deletions pkg/util/log/eventpb/BUILD.bazel
@@ -58,6 +58,7 @@ proto_library(
"role_events.proto",
"session_events.proto",
"sql_audit_events.proto",
"storage_events.proto",
"telemetry.proto",
"zone_events.proto",
],
1 change: 1 addition & 0 deletions pkg/util/log/eventpb/PROTOS.bzl
@@ -17,6 +17,7 @@ EVENTPB_PROTOS = [
"cluster_events.proto",
"job_events.proto",
"health_events.proto",
"storage_events.proto",
"telemetry.proto",
]

7 changes: 7 additions & 0 deletions pkg/util/log/eventpb/event_test.go
@@ -58,6 +58,13 @@ func TestEventJSON(t *testing.T) {
// `includeempty` annotation, so nothing is emitted, despite the presence of
// zero values.
{&SchemaSnapshotMetadata{SnapshotID: "", NumRecords: 0}, ""},

// Primitive fields with an `includeempty` annotation will emit their zero
// value.
{
&StoreStats{Levels: []LevelStats{{Level: 0, NumFiles: 1}, {Level: 6, NumFiles: 2}}},
`"Levels":[{"Level":0,"NumFiles":1},{"Level":6,"NumFiles":2}]`,
},
}

for _, tc := range testCases {
6 changes: 6 additions & 0 deletions pkg/util/log/eventpb/eventlog_channels_generated.go


12 changes: 12 additions & 0 deletions pkg/util/log/eventpb/eventpbgen/gen.go
@@ -660,6 +660,18 @@ func (m *{{.GoType}}) AppendJSONFields(printComma bool, b redact.RedactableBytes
{{ if not .AllowZeroValue -}}
}
{{- end }}
{{- else if eq .FieldType "array_of_LevelStats"}}
if len(m.{{.FieldName}}) > 0 {
if printComma { b = append(b, ',')}; printComma = true
b = append(b, "\"{{.FieldName}}\":["...)
for i, l := range m.{{.FieldName}} {
if i > 0 { b = append(b, ',') }
b = append(b, '{')
printComma, b = l.AppendJSONFields(false, b)
b = append(b, '}')
}
b = append(b, ']')
}
{{- else if eq .FieldType "protobuf"}}
if m.{{.FieldName}} != nil {
if printComma { b = append(b, ',')}; printComma = true
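
For a rough sense of what the `array_of_LevelStats` branch above expands to, here is a simplified, self-contained sketch that reproduces the expected JSON from the `event_test.go` case in this commit. The types and helper are pared-down stand-ins, not the actual generated code (which operates on `redact.RedactableBytes` and handles every `LevelStats` field):

```go
package main

import "fmt"

// LevelStats is a pared-down stand-in for eventpb.LevelStats.
type LevelStats struct {
	Level    uint32
	NumFiles int64
}

// appendJSONFields mirrors the generated per-field emission: Level carries
// the includeempty annotation, so it is always written; NumFiles is written
// only when non-zero.
func appendJSONFields(printComma bool, b []byte, l LevelStats) (bool, []byte) {
	if printComma {
		b = append(b, ',')
	}
	printComma = true
	b = append(b, fmt.Sprintf("\"Level\":%d", l.Level)...)
	if l.NumFiles != 0 {
		b = append(b, ',')
		b = append(b, fmt.Sprintf("\"NumFiles\":%d", l.NumFiles)...)
	}
	return printComma, b
}

func main() {
	levels := []LevelStats{{Level: 0, NumFiles: 1}, {Level: 6, NumFiles: 2}}
	var b []byte
	// The template above, instantiated for the StoreStats field "Levels".
	if len(levels) > 0 {
		b = append(b, "\"Levels\":["...)
		for i, l := range levels {
			if i > 0 {
				b = append(b, ',')
			}
			b = append(b, '{')
			_, b = appendJSONFields(false, b, l)
			b = append(b, '}')
		}
		b = append(b, ']')
	}
	fmt.Println(string(b))
	// Prints: "Levels":[{"Level":0,"NumFiles":1},{"Level":6,"NumFiles":2}]
}
```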