sql: add sql.mutations.max_row_size.log guardrail (large row logging)
Addresses: cockroachdb#67400

Add sql.mutations.max_row_size.log, a new cluster setting that
controls large row logging. Rows larger than this size will have their
primary keys logged to the SQL_PERF or SQL_INTERNAL_PERF channel
whenever the SQL layer writes them to the KV layer.
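
To illustrate, here is a minimal sketch of exercising the setting from a
Go client. The connection string, table, and 1KiB threshold are all
hypothetical; CockroachDB speaks the PostgreSQL wire protocol, so any
pgwire driver works:

```go
package main

import (
	"database/sql"
	"log"
	"strings"

	_ "github.com/lib/pq" // any pgwire driver can talk to CockroachDB
)

func main() {
	// Hypothetical local connection; adjust for your cluster.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Log any row (or column family) larger than 1 KiB.
	if _, err := db.Exec(
		`SET CLUSTER SETTING sql.mutations.max_row_size.log = '1KiB'`,
	); err != nil {
		log.Fatal(err)
	}

	// This row is ~2 KiB, so writing it should emit a LargeRow event
	// to the SQL_PERF channel when the SQL layer hands it to KV.
	if _, err := db.Exec(
		`CREATE TABLE IF NOT EXISTS t (k INT PRIMARY KEY, v STRING)`,
	); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(
		`INSERT INTO t VALUES (1, $1)`, strings.Repeat("x", 2048),
	); err != nil {
		log.Fatal(err)
	}
}
```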

This logging takes place in rowHelper, which is used by both
row.Inserter and row.Updater. Most of the work is plumbing
settings.Values and SessionData into rowHelper, and adding a new
structured event type.
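
As a rough, self-contained sketch of the check described above (the
types and names below are simplified stand-ins, not the actual
rowHelper code):

```go
package sketch

import "log"

// rowHelperSketch stands in for the plumbing described above:
// maxRowSizeLog mirrors the cluster setting read from settings.Values,
// and internal mirrors SessionData.Internal.
type rowHelperSketch struct {
	maxRowSizeLog int64 // 0 disables large row logging
	internal      bool  // true when the mutating statement is internal
}

// checkRowSize logs the primary key of any row (or column family)
// whose encoded size exceeds the threshold, routing internal queries
// to the internal channel.
func (rh *rowHelperSketch) checkRowSize(
	primaryKey string, tableID, familyID uint32, size int64,
) {
	if rh.maxRowSizeLog == 0 || size <= rh.maxRowSizeLog {
		return
	}
	channel := "SQL_PERF"
	if rh.internal {
		channel = "SQL_INTERNAL_PERF"
	}
	log.Printf("%s: large row: size=%d table=%d family=%d key=%s",
		channel, size, tableID, familyID, primaryKey)
}
```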

Release note (ops change): A new cluster setting,
sql.mutations.max_row_size.log, was added, which controls large row
logging. Whenever a row larger than this size (or a single column
family, if multiple column families are in use) is written, a LargeRow
event is logged to the SQL_PERF channel (or a LargeRowInternal event is
logged to SQL_INTERNAL_PERF if the row was written by an internal
query). This can occur for INSERT, UPSERT, UPDATE, CREATE TABLE AS,
CREATE INDEX, ALTER TABLE, ALTER INDEX, IMPORT, and RESTORE statements.
SELECT, DELETE, TRUNCATE, and DROP are not affected by this setting.
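
For a sense of what such an event carries, here is an illustrative
LargeRow payload built from the fields documented in the eventlog.md
diff below; all values are invented, and the exact log envelope depends
on the cluster's log configuration:

```
{"Timestamp": 1631283600000000000, "EventType": "large_row",
 "RowSize": 2089, "TableID": 52, "FamilyID": 0,
 "PrimaryKey": "‹/Table/52/1/1/0›"}
```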

Release justification: Low-risk, high-benefit change to existing
functionality. This adds logging whenever a large row is written to the
database. The default is 0 in 21.1, meaning disabled.
michae2 committed Sep 10, 2021
1 parent f199f43 commit 766806c
Showing 30 changed files with 1,427 additions and 192 deletions.
41 changes: 41 additions & 0 deletions docs/generated/eventlog.md
@@ -1786,6 +1786,28 @@ are only emitted via external logging.
Events in this category are logged to the `SQL_PERF` channel.


+### `large_row`
+
+An event of type `large_row` is recorded when a statement tries to write a row larger than
+cluster setting `sql.mutations.max_row_size.log` to the database. Multiple
+LargeRow events will be recorded for statements writing multiple large rows.
+LargeRow events are recorded before the transaction commits, so in the case
+of transaction abort there will not be a corresponding row in the database.
+
+
+
+
+#### Common fields
+
+| Field | Description | Sensitive |
+|--|--|--|
+| `Timestamp` | The timestamp of the event. Expressed as nanoseconds since the Unix epoch. | no |
+| `EventType` | The type of the event. | no |
+| `RowSize` | | no |
+| `TableID` | | no |
+| `FamilyID` | | no |
+| `PrimaryKey` | | yes |
+
### `slow_query`

An event of type `slow_query` is recorded when a query triggers the "slow query" condition.
@@ -1836,6 +1858,25 @@ are only emitted via external logging.
Events in this category are logged to the `SQL_INTERNAL_PERF` channel.


+### `large_row_internal`
+
+An event of type `large_row_internal` is recorded when an internal query tries to write a row
+larger than cluster setting `sql.mutations.max_row_size.log` to the database.
+
+
+
+
+#### Common fields
+
+| Field | Description | Sensitive |
+|--|--|--|
+| `Timestamp` | The timestamp of the event. Expressed as nanoseconds since the Unix epoch. | no |
+| `EventType` | The type of the event. | no |
+| `RowSize` | | no |
+| `TableID` | | no |
+| `FamilyID` | | no |
+| `PrimaryKey` | | yes |
+
### `slow_query_internal`

An event of type `slow_query_internal` is recorded when a query triggers the "slow query" condition,
1 change: 1 addition & 0 deletions docs/generated/settings/settings-for-tenants.txt
@@ -87,6 +87,7 @@ sql.metrics.statement_details.plan_collection.enabled boolean true periodically
sql.metrics.statement_details.plan_collection.period duration 5m0s the time until a new logical plan is collected
sql.metrics.statement_details.threshold duration 0s minimum execution time to cause statement statistics to be collected. If configured, no transaction stats are collected.
sql.metrics.transaction_details.enabled boolean true collect per-application transaction statistics
+sql.mutations.max_row_size.log byte size 0 B maximum size of row (or column family if multiple column families are in use) that SQL can write to the database, above which an event is logged to SQL_PERF (or SQL_INTERNAL_PERF if the mutating statement was internal); setting to 0 disables large row logging
sql.notices.enabled boolean true enable notices in the server/client protocol being sent
sql.optimizer.uniqueness_checks_for_gen_random_uuid.enabled boolean false if enabled, uniqueness checks may be planned for mutations of UUID columns updated with gen_random_uuid(); otherwise, uniqueness is assumed due to near-zero collision probability
sql.spatial.experimental_box2d_comparison_operators.enabled boolean false enables the use of certain experimental box2d comparison operators
1 change: 1 addition & 0 deletions docs/generated/settings/settings.html
@@ -89,6 +89,7 @@
<tr><td><code>sql.metrics.statement_details.plan_collection.period</code></td><td>duration</td><td><code>5m0s</code></td><td>the time until a new logical plan is collected</td></tr>
<tr><td><code>sql.metrics.statement_details.threshold</code></td><td>duration</td><td><code>0s</code></td><td>minimum execution time to cause statement statistics to be collected. If configured, no transaction stats are collected.</td></tr>
<tr><td><code>sql.metrics.transaction_details.enabled</code></td><td>boolean</td><td><code>true</code></td><td>collect per-application transaction statistics</td></tr>
+<tr><td><code>sql.mutations.max_row_size.log</code></td><td>byte size</td><td><code>0 B</code></td><td>maximum size of row (or column family if multiple column families are in use) that SQL can write to the database, above which an event is logged to SQL_PERF (or SQL_INTERNAL_PERF if the mutating statement was internal); setting to 0 disables large row logging</td></tr>
<tr><td><code>sql.notices.enabled</code></td><td>boolean</td><td><code>true</code></td><td>enable notices in the server/client protocol being sent</td></tr>
<tr><td><code>sql.optimizer.uniqueness_checks_for_gen_random_uuid.enabled</code></td><td>boolean</td><td><code>false</code></td><td>if enabled, uniqueness checks may be planned for mutations of UUID columns updated with gen_random_uuid(); otherwise, uniqueness is assumed due to near-zero collision probability</td></tr>
<tr><td><code>sql.spatial.experimental_box2d_comparison_operators.enabled</code></td><td>boolean</td><td><code>false</code></td><td>enables the use of certain experimental box2d comparison operators</td></tr>
8 changes: 4 additions & 4 deletions pkg/ccl/importccl/import_processor_test.go
@@ -89,7 +89,7 @@ func TestConverterFlushesBatches(t *testing.T) {
}

ctx := context.Background()
-evalCtx := tree.MakeTestingEvalContext(nil)
+evalCtx := tree.MakeTestingEvalContext(cluster.MakeTestingClusterSettings())

tests := []testSpec{
newTestSpec(ctx, t, csvFormat(), "testdata/csv/data-0"),
@@ -228,7 +228,7 @@ func TestImportIgnoresProcessedFiles(t *testing.T) {
defer log.Scope(t).Close(t)
ctx := context.Background()

-evalCtx := tree.MakeTestingEvalContext(nil)
+evalCtx := tree.MakeTestingEvalContext(cluster.MakeTestingClusterSettings())
flowCtx := &execinfra.FlowCtx{
EvalCtx: &evalCtx,
Cfg: &execinfra.ServerConfig{
@@ -329,7 +329,7 @@ func TestImportHonorsResumePosition(t *testing.T) {
pkBulkAdder := &doNothingKeyAdder{}
ctx := context.Background()

-evalCtx := tree.MakeTestingEvalContext(nil)
+evalCtx := tree.MakeTestingEvalContext(cluster.MakeTestingClusterSettings())
flowCtx := &execinfra.FlowCtx{
EvalCtx: &evalCtx,
Cfg: &execinfra.ServerConfig{
@@ -457,7 +457,7 @@ func TestImportHandlesDuplicateKVs(t *testing.T) {

batchSize := 13
defer row.TestingSetDatumRowConverterBatchSize(batchSize)()
-evalCtx := tree.MakeTestingEvalContext(nil)
+evalCtx := tree.MakeTestingEvalContext(cluster.MakeTestingClusterSettings())
flowCtx := &execinfra.FlowCtx{
EvalCtx: &evalCtx,
Cfg: &execinfra.ServerConfig{
4 changes: 3 additions & 1 deletion pkg/ccl/importccl/import_stmt.go
@@ -2537,7 +2537,9 @@ func (r *importResumer) dropTables(
// older-format (v1.1) descriptor. This enables ClearTableData to use a
// RangeClear for faster data removal, rather than removing by chunks.
empty[i].TableDesc().DropTime = dropTime
-if err := gcjob.ClearTableData(ctx, execCfg.DB, execCfg.DistSender, execCfg.Codec, empty[i]); err != nil {
+if err := gcjob.ClearTableData(
+	ctx, execCfg.DB, execCfg.DistSender, execCfg.Codec, &execCfg.Settings.SV, empty[i],
+); err != nil {
return errors.Wrapf(err, "clearing data for table %d", empty[i].GetID())
}
}
1 change: 1 addition & 0 deletions pkg/sql/BUILD.bazel
@@ -594,6 +594,7 @@ go_test(
"//pkg/util/log/channel",
"//pkg/util/log/eventpb",
"//pkg/util/log/logconfig",
+"//pkg/util/log/logpb",
"//pkg/util/metric",
"//pkg/util/mon",
"//pkg/util/protoutil",
14 changes: 11 additions & 3 deletions pkg/sql/backfill.go
@@ -818,7 +818,9 @@ func TruncateInterleavedIndexes(
resumeAt := resume
// Make a new txn just to drop this chunk.
if err := db.Txn(ctx, func(ctx context.Context, txn *kv.Txn) error {
-rd := row.MakeDeleter(codec, table, nil /* requestedCols */)
+rd := row.MakeDeleter(
+	codec, table, nil /* requestedCols */, &execCfg.Settings.SV, true, /* internal */
+)
td := tableDeleter{rd: rd, alloc: alloc}
if err := td.init(ctx, txn, nil /* *tree.EvalContext */); err != nil {
return err
@@ -893,7 +895,10 @@ func (sc *SchemaChanger) truncateIndexes(
if err != nil {
return err
}
-rd := row.MakeDeleter(sc.execCfg.Codec, tableDesc, nil /* requestedCols */)
+rd := row.MakeDeleter(
+	sc.execCfg.Codec, tableDesc, nil /* requestedCols */, &sc.settings.SV,
+	true, /* internal */
+)
td := tableDeleter{rd: rd, alloc: alloc}
if err := td.init(ctx, txn, nil /* *tree.EvalContext */); err != nil {
return err
@@ -2484,7 +2489,10 @@ func indexTruncateInTxn(
alloc := &rowenc.DatumAlloc{}
var sp roachpb.Span
for done := false; !done; done = sp.Key == nil {
-rd := row.MakeDeleter(execCfg.Codec, tableDesc, nil /* requestedCols */)
+rd := row.MakeDeleter(
+	execCfg.Codec, tableDesc, nil /* requestedCols */, &execCfg.Settings.SV,
+	evalCtx.SessionData.Internal,
+)
td := tableDeleter{rd: rd, alloc: alloc}
if err := td.init(ctx, txn, evalCtx); err != nil {
return err
2 changes: 2 additions & 0 deletions pkg/sql/backfill/backfill.go
@@ -279,6 +279,8 @@ func (cb *ColumnBackfiller) RunColumnBackfillChunk(
requestedCols,
row.UpdaterOnlyColumns,
&cb.alloc,
+&cb.evalCtx.Settings.SV,
+cb.evalCtx.SessionData.Internal,
)
if err != nil {
return roachpb.Key{}, err
5 changes: 4 additions & 1 deletion pkg/sql/create_table.go
@@ -497,7 +497,10 @@ func (n *createTableNode) startExec(params runParams) error {
params.ExecCfg().Codec,
desc.ImmutableCopy().(catalog.TableDescriptor),
desc.Columns,
-params.p.alloc)
+params.p.alloc,
+&params.ExecCfg().Settings.SV,
+params.p.SessionData().Internal,
+)
if err != nil {
return err
}