stmtdiagnostics: support probabilistic bundle collection #82750

Merged
merged 2 commits into cockroachdb:master from 220610.stmt-diag-sampling
Jun 16, 2022

Conversation

irfansharif
Contributor

@irfansharif irfansharif commented Jun 10, 2022

We enable the following ability (the first half of #82896):

  Pick a stmt fingerprint, declare a sampling probability which controls
  when verbose tracing is enabled for it, and a latency threshold for
  which a trace is persisted.

With a given stmt rate R (say 1000/s) and a given percentile we're trying
to capture (say p99.9), we have 0.001R stmt/s in the 99.9th percentile
(1/s in our example). We should be able to set a sampling probability P
such that with high likelihood (>95%) we capture at least one trace over
the next S seconds (see the back-of-envelope sketch after the walkthrough
below). The sampling probability lets you control the overhead you're
introducing for those statements in aggregate; dialing it up higher lets
you lower S. You might want to do such a thing for infrequently executed
statements. We do all this using the existing statement diagnostics
machinery. It looks roughly as follows:

   > SELECT crdb_internal.request_statement_bundle(
      'INSERT INTO ...', -- fingerprint
      0.01,              -- sampling probability
      '120ms'::INTERVAL, -- latency target
      '15m'::INTERVAL    -- request expiration
     );

   $ cockroach statement-diag list --insecure
    No statement diagnostics bundles available.
    Outstanding activation requests:
      ID                  Statement        Sampling probability  Min latency
      770367610894417921  INSERT INTO ...  0.0100                90ms

   # wait for an eventual capture..

   $ cockroach statement-diag list --insecure
    Statement diagnostics bundles:
      ID                  Collection time          Statement
      770367073624621057  2022-06-14 00:49:33 UTC  INSERT INTO ...

   $ cockroach statement-diag download 770367073624621057 --insecure
   Bundle saved to "stmt-bundle-770367073624621057.zip"

   $ unzip stmt-bundle-770367073624621057.zip -d stmt
   $ cat stmt/trace.txt
    ...
        0.846ms      0.017ms      event:kv/kvserver/spanlatch/manager.go:532
                                    [n1,s1,r76/1:/Table/10{7/1-8}] waiting to
                                    acquire read latch /Table/107/1/41/7/0@0,0,
                                    held by write latch
                                    /Table/107/1/41/7/0@1655167773.362147000,0
        98.776ms     97.930ms     event:kv/kvserver/concurrency/concurrency_manager.go:301
                                    [n1,s1,r76/1:/Table/10{7/1-8}] scanning
                                    lock table for conflicting locks
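
As a rough way to size P (a back-of-envelope sketch, not something the
PR computes for you): assuming sampled executions are independent, with
T = 0.001*R*S tail executions expected over the request window, the
chance of capturing at least one is 1 - (1-P)^T, so any
P >= 1 - 0.05^(1/T) clears the >95% bar. For the 15m request above,
T is roughly 900:

   > SELECT 1 - power(1 - 0.01, 900)  AS capture_likelihood, -- ~0.9999 at P = 0.01
            1 - power(0.05, 1.0/900)  AS min_p_for_95pct;    -- ~0.0033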

We leave wiring this up to the UI and continuous capture (second half of
#82896) to future PRs.

Release note: None

@cockroach-teamcity
Member

This change is Reviewable

@irfansharif
Contributor Author

Still have a few TODOs but ready for a look. Have to confirm it works with more things.

@irfansharif irfansharif marked this pull request as ready for review June 13, 2022 23:24
@irfansharif irfansharif requested a review from a team as a code owner June 13, 2022 23:24
@irfansharif irfansharif requested a review from a team June 13, 2022 23:24
@irfansharif irfansharif requested review from a team as code owners June 13, 2022 23:24
@irfansharif irfansharif requested review from a team, yuzefovich and andreimatei and removed request for a team June 13, 2022 23:24
Member

@yuzefovich yuzefovich left a comment

:lgtm:

Reviewed 4 of 4 files at r3, all commit messages.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @andreimatei)

@yuzefovich
Member

nit: don't forget to update the PR description.

@irfansharif
Contributor Author

Done, thanks!

bors r+

@craig
Contributor

craig bot commented Jun 16, 2022

Build succeeded:

@craig craig bot merged commit 1cda468 into cockroachdb:master Jun 16, 2022
@irfansharif irfansharif deleted the 220610.stmt-diag-sampling branch June 16, 2022 02:48
@tbg
Member

tbg commented Jun 16, 2022

Nice! This is really useful.

@irfansharif
Contributor Author

@tbg You should try out #83020, which makes it even more so.

irfansharif added a commit to irfansharif/cockroach that referenced this pull request Jun 17, 2022
..until expiry. Informs cockroachdb#82896 (more specifically this is a short-term
alternative to the part pertaining to continuous tail capture). The
issue has more background, but we repeat some below for posterity.

It's desirable to draw from a set of tail execution traces collected
over time when investigating tail latencies. cockroachdb#82750 introduced a
probabilistic mechanism to capture a single tail event for individual
stmts with bounded overhead (determined by the sampling probability,
trading off how long until a single capture is obtained). This PR
introduces a `sql.stmt_diagnostics.collect_continuously_until_expired`
to collect captures continuously over some period of time for aggregate
analysis. Longer term we'd want:
- Controls over the maximum number of captures we'd want stored over
  some period of time;
- Eviction of older bundles, assuming they're less relevant, making room
  for newer captures.

To safeguard against misuse (in this current form we should only use it
for experiments or escalations under controlled environments), we only
act on this setting provided the diagnostics request has an expiration
timestamp and a specified probability, crude measures to prevent
unbounded growth.
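
Enabling it might look like the following (a minimal sketch; assuming
the setting is a boolean, and keeping in mind it's only acted on for
requests that also carry an expiration and a sampling probability as
described above):

    > SET CLUSTER SETTING
        sql.stmt_diagnostics.collect_continuously_until_expired = true;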

---

To get some idea of how this can be used, consider the kinds of
experiments we're running as part of cockroachdb#75066. Specifically we have a
reproduction where we can observe spikes in latencies for foreground
traffic in the presence of concurrent backups (incremental/full). In an
experiment with incremental backups running every 10m, with full backups
running every 35m (`RECURRING '*/10 * * * *' FULL BACKUP '35 * * * *'`),
we observe latency spikes during overlap periods. With this cluster
setting we were able to set up trace captures over a 10h window to get a
set of representative outlier traces to investigate further.

    > SELECT crdb_internal.request_statement_bundle(
      'INSERT INTO new_order(no_o_id, ...)', -- stmt fingerprint
      0.05,                                  -- 5% sampling probability
      '30ms'::INTERVAL,                      -- 30ms target (p99.9)
      '10h'::INTERVAL                        -- capture window
    );

    > WITH histogram AS
         (SELECT extract('minute', collected_at) AS minute,
                 count(*) FROM system.statement_diagnostics
          GROUP BY minute)
    SELECT minute, repeat('*', (30 * count/(max(count) OVER ()))::INT8) AS freq
    FROM histogram
    ORDER BY count DESC
    LIMIT 10;

      minute |              freq
    ---------+---------------------------------
          36 | ******************************
          38 | *********************
          35 | *********************
          00 | *********************
          37 | ********************
          30 | ********************
          40 | *****************
          20 | **************
          10 | *************
          50 | ***********
    (10 rows)

We see that we captured just the set of bundles/traces we were interested in.

Release note: None
irfansharif added a commit to irfansharif/cockroach that referenced this pull request Jun 20, 2022
..until expiry. Informs cockroachdb#82896 (more specifically this is a short-term
alternative to the part pertaining to continuous tail capture). The
issue has more background, but we repeat some below for posterity.

It's desirable to draw from a set of tail execution traces collected
over time when investigating tail latencies. cockroachdb#82750 introduced a
probabilistic mechanism to capture a single tail event for individual
stmts with bounded overhead (determined by the sampling probability,
trading off how long until a single capture is obtained). This PR
introduces a sql.stmt_diagnostics.collect_continuously.enabled to
collect captures continuously over some period of time for aggregate
analysis. Longer term we'd want:
- Controls over the maximum number of captures we'd want stored over
  some period of time;
- Eviction of older bundles, assuming they're less relevant, making room
  for newer captures.

To safeguard against misuse (in this current form we should only use it
for experiments or escalations under controlled environments), we only
act on this setting provided the diagnostics request has an expiration
timestamp and a specified probability, crude measures to prevent
unbounded growth.

---

To get some idea of how this can be used, consider the kinds of
experiments we're running as part of cockroachdb#75066. Specifically we have a
reproduction where we can observe spikes in latencies for foreground
traffic in the presence of concurrent backups (incremental/full). In an
experiment with incremental backups running every 10m, with full backups
running every 35m (`RECURRING '*/10 * * * *' FULL BACKUP '35 * * * *'`),
we observe latency spikes during overlap periods. With this cluster
setting we were able to set up trace captures over a 10h window to get a
set of representative outlier traces to investigate further.

    > SELECT crdb_internal.request_statement_bundle(
      'INSERT INTO new_order(no_o_id, ...)', -- stmt fingerprint
      0.05,                                  -- 5% sampling probability
      '30ms'::INTERVAL,                      -- 30ms target (p99.9)
      '10h'::INTERVAL                        -- capture window
    );

    > WITH histogram AS
         (SELECT extract('minute', collected_at) AS minute,
                 count(*) FROM system.statement_diagnostics
          GROUP BY minute)
    SELECT minute, repeat('*', (30 * count/(max(count) OVER ()))::INT8) AS freq
    FROM histogram
    ORDER BY count DESC
    LIMIT 10;

      minute |              freq
    ---------+---------------------------------
          36 | ******************************
          38 | *********************
          35 | *********************
          00 | *********************
          37 | ********************
          30 | ********************
          40 | *****************
          20 | **************
          10 | *************
          50 | ***********
    (10 rows)

We see that we captured just the set of bundles/traces we were interested in.

Release note: None
Contributor

@andreimatei andreimatei left a comment

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained


pkg/cli/clisqlclient/statement_diag.go line 107 at r3 (raw file):

// TODO(irfansharif): Remove this in 23.1.
func isAtLeast22dot2ClusterVersion(ctx context.Context, conn Conn) (bool, error) {

how come we don't look at clusterversion.SampledStmtDiagReqs ?

@irfansharif
Contributor Author

pkg/cli/clisqlclient/statement_diag.go line 107 at r3 (raw file):

Previously, andreimatei (Andrei Matei) wrote…

how come we don't look at clusterversion.SampledStmtDiagReqs ?

This is code running in the CLI where we don’t have gossiped cluster versions to check against. I guess we could've polled the cluster for the active version instead of checking the schema, but this felt reasonable (the real reason is I was lazy and just cargo culted 0770254#diff-e32ce0eb18132e16e033d89268f1c7d5272b34e1db2f28290e3c55cf63085681L103-L104).
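
For the record, the polling alternative would've looked something like
the following over the CLI's SQL connection (just a sketch, not what
this PR does):

    > SHOW CLUSTER SETTING version;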

Contributor

@andreimatei andreimatei left a comment

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained


pkg/cli/clisqlclient/statement_diag.go line 107 at r3 (raw file):

Previously, irfansharif (irfan sharif) wrote…

This is code running in the CLI where we don’t have gossiped cluster versions to check against. I guess we could've polled the cluster for the active version instead of checking the schema, but this felt reasonable (the real reason is I was lazy and just cargo culted 0770254#diff-e32ce0eb18132e16e033d89268f1c7d5272b34e1db2f28290e3c55cf63085681L103-L104).

ack

irfansharif added a commit to irfansharif/cockroach that referenced this pull request Sep 24, 2022
..until expiry. Informs cockroachdb#82896 (more specifically this is a short-term
alternative to the part pertaining to continuous tail capture). The
issue has more background, but we repeat some below for posterity.

It's desirable to draw from a set of tail execution traces collected
over time when investigating tail latencies. cockroachdb#82750 introduced a
probabilistic mechanism to capture a single tail event for individual
stmts with bounded overhead (determined by the sampling probability,
trading off how long until a single capture is obtained). This PR
introduces a sql.stmt_diagnostics.collect_continuously.enabled to
collect captures continuously over some period of time for aggregate
analysis. Longer term we'd want:
- Controls over the maximum number of captures we'd want stored over
  some period of time;
- Eviction of older bundles, assuming they're less relevant, making room
  for newer captures.

To safeguard against misuse (in this current form we should only use it
for experiments or escalations under controlled environments), we only
act on this setting provided the diagnostics request has an expiration
timestamp and a specified probability, crude measures to prevent
unbounded growth.

---

To get some idea of how this can be used, consider the kinds of
experiments we're running as part of cockroachdb#75066. Specifically we have a
reproduction where we can observe spikes in latencies for foreground
traffic in the presence of concurrent backups (incremental/full). In an
experiment with incremental backups running every 10m, with full backups
running every 35m (`RECURRING '*/10 * * * *' FULL BACKUP '35 * * * *'`),
we observe latency spikes during overlap periods. With this cluster
setting we were able to set up trace captures over a 10h window to get a
set of representative outlier traces to investigate further.

    > SELECT crdb_internal.request_statement_bundle(
      'INSERT INTO new_order(no_o_id, ...)', -- stmt fingerprint
      0.05,                                  -- 5% sampling probability
      '30ms'::INTERVAL,                      -- 30ms target (p99.9)
      '10h'::INTERVAL                        -- capture window
    );

    > WITH histogram AS
         (SELECT extract('minute', collected_at) AS minute,
                 count(*) FROM system.statement_diagnostics
          GROUP BY minute)
    SELECT minute, repeat('*', (30 * count/(max(count) OVER ()))::INT8) AS freq
    FROM histogram
    ORDER BY count DESC
    LIMIT 10;

      minute |              freq
    ---------+---------------------------------
          36 | ******************************
          38 | *********************
          35 | *********************
          00 | *********************
          37 | ********************
          30 | ********************
          40 | *****************
          20 | **************
          10 | *************
          50 | ***********
    (10 rows)

We see that we captured just the set of bundles/traces we were interested in.

Release note: None
irfansharif added a commit to irfansharif/cockroach that referenced this pull request Sep 27, 2022
..until expiry. Informs cockroachdb#82896 (more specifically this is a short-term
alternative to the part pertaining to continuous tail capture). The
issue has more background, but we repeat some below for posterity.

It's desirable to draw from a set of tail execution traces collected
over time when investigating tail latencies. cockroachdb#82750 introduced a
probabilistic mechanism to capture a single tail event for individual
stmts with bounded overhead (determined by the sampling probability,
trading off how long until a single capture is obtained). This PR
introduces a sql.stmt_diagnostics.collect_continuously.enabled to
collect captures continuously over some period of time for aggregate
analysis. To get some idea of how this can be used, consider the kinds
of experiments we're running as part of cockroachdb#75066. Specifically we have a
reproduction where we can observe spikes in latencies for foreground
traffic in the presence of concurrent backups (incremental/full). In an
experiment with incremental backups running every 10m, with full backups
running every 35m (`RECURRING '*/10 * * * *' FULL BACKUP '35 * * * *'`),
we observe latency spikes during overlap periods. With this cluster
setting we were able to set up trace captures over a 10h window to get a
set of representative outlier traces to investigate further.

    SELECT crdb_internal.request_statement_bundle(
      'INSERT INTO new_order(no_o_id, ...)', -- stmt fingerprint
      0.05,                                  -- 5% sampling probability
      '30ms'::INTERVAL,                      -- 30ms target (p99.9)
      '10h'::INTERVAL                        -- capture window
    );

    WITH histogram AS
         (SELECT extract('minute', collected_at) AS minute,
                 count(*) FROM system.statement_diagnostics
          GROUP BY minute)
    SELECT minute, repeat('*', (30 * count/(max(count) OVER ()))::INT8) AS freq
    FROM histogram
    ORDER BY count DESC
    LIMIT 10;

      minute |              freq
    ---------+---------------------------------
          36 | ******************************
          38 | *********************
          35 | *********************
          00 | *********************
          37 | ********************
          30 | ********************
          40 | *****************
          20 | **************
          10 | *************
          50 | ***********
    (10 rows)

We see that we captured just the set of bundles/traces we were
interested in. Longer term we'd want:
- Controls over the maximum number of captures we'd want stored over
  some period of time;
- Eviction of older bundles, assuming they're less relevant, making room
  for newer captures.
To safeguard against misuse (in this current form we should only use it
for experiments or escalations under controlled environments), we only
act on this setting provided the diagnostics request has an expiration
timestamp and a specified probability, crude measures to prevent
unbounded growth.

Release note: None
craig bot pushed a commit that referenced this pull request Sep 27, 2022
83020: stmtdiagnostics: support continuous bundle collection r=irfansharif a=irfansharif

..until expiry. Informs #82896 (more specifically this is a short-term
alternative to the part pertaining to continuous tail capture). The
issue has more background, but we repeat some below for posterity.

It's desirable to draw from a set of tail execution traces collected
over time when investigating tail latencies. #82750 introduced a
probabilistic mechanism to capture a single tail event for individual
stmts with bounded overhead (determined by the sampling probability,
trading off how long until a single capture is obtained). This PR
introduces a `sql.stmt_diagnostics.collect_continuously_until_expired`
to collect captures continuously over some period of time for aggregate
analysis. Longer term we'd want:
- Controls over the maximum number of captures we'd want stored over
  some period of time;
- Eviction of older bundles, assuming they're less relevant, making room
  for newer captures.

To safeguard against misuse (in this current form we should only use it
for experiments or escalations under controlled environments), we only
act on this setting provided the diagnostics request has an expiration
timestamp and a specified probability, crude measures to prevent
unbounded growth.

---

To get some idea of how this can be used, consider the kinds of
experiments we're running as part of #75066. Specifically we have a
reproduction where we can observe spikes in latencies for foreground
traffic in the presence of concurrent backups (incremental/full). In an
experiment with incremental backups running every 10m, with full backups
running every 35m (`RECURRING '*/10 * * * *' FULL BACKUP '35 * * * *'`),
we observe latency spikes during overlap periods. With this cluster
setting we were able to set up trace captures over a 10h window to get a
set of representative outlier traces to investigate further.

    > SELECT crdb_internal.request_statement_bundle(
      'INSERT INTO new_order(no_o_id, ...)', -- stmt fingerprint
      0.05,                                  -- 5% sampling probability
      '30ms'::INTERVAL,                      -- 30ms target (p99.9)
      '10h'::INTERVAL                        -- capture window
    );

    > WITH histogram AS
         (SELECT extract('minute', collected_at) AS minute,
                 count(*) FROM system.statement_diagnostics
          GROUP BY minute)
    SELECT minute, repeat('*', (30 * count/(max(count) OVER ()))::INT8) AS freq
    FROM histogram
    ORDER BY count DESC
    LIMIT 10;

      minute |              freq
    ---------+---------------------------------
          36 | ******************************
          38 | *********************
          35 | *********************
          00 | *********************
          37 | ********************
          30 | ********************
          40 | *****************
          20 | **************
          10 | *************
          50 | ***********
    (10 rows)

We see that we captured just the set of bundles/traces we were interested in.

Release note: None

86591: kvserver: sync checksum computation with long poll r=erikgrinaker a=pavelkalinnikov

Previously, the checksum computation would run until completion unconditionally
(unless the collection request comes before it). This is not the best use of
the limited pool capacity, because the result of this computation may never be
requested.

After this commit, the checksum computation task is synchronized with the
checksum collection request. Both wait at most 5 seconds until the other party
has joined. Once joined, the computation starts; otherwise it is skipped.

If any party abandons the request, then the `replicaChecksum` record is preserved
in the state, and is scheduled for a GC later. This is to help the other party
to fail fast, instead of waiting, if it arrives late.

This change also removes the no longer needed concurrency limit for the tasks,
because tasks are canceled reliably and will not pile up.

Fixes #77432

Release note (performance improvement): consistency checks are now properly
cancelled on timeout, preventing them from piling up.

88768: ci: add MacOS ARM CI config r=jlinder a=rail

Previously, the macOS ARM64 platform was added, but CI wouldn't run it.

This PR adds a CI platform to build macOS ARM64 binaries.

Release note: None

Co-authored-by: irfan sharif <irfanmahmoudsharif@gmail.com>
Co-authored-by: Pavel Kalinnikov <pavel@cockroachlabs.com>
Co-authored-by: Rail Aliiev <rail@iqchoice.com>
blathers-crl bot pushed a commit that referenced this pull request Sep 28, 2022
..until expiry. Informs #82896 (more specifically this is a short-term
alternative to the part pertaining to continuous tail capture). The
issue has more background, but we repeat some below for posterity.

It's desirable to draw from a set of tail execution traces collected
over time when investigating tail latencies. #82750 introduced a
probabilistic mechanism to capture a single tail event for individual
stmts with bounded overhead (determined by the sampling probability,
trading off how long until a single capture is obtained). This PR
introduces a sql.stmt_diagnostics.collect_continuously.enabled to
collect captures continuously over some period of time for aggregate
analysis. To get some idea of how this can be used, consider the kinds
of experiments we're running as part of #75066. Specifically we have a
reproduction where we can observe spikes in latencies for foreground
traffic in the presence of concurrent backups (incremental/full). In an
experiment with incremental backups running every 10m, with full backups
running every 35m (`RECURRING '*/10 * * * *' FULL BACKUP '35 * * * *'`),
we observe latency spikes during overlap periods. With this cluster
setting we were able to set up trace captures over a 10h window to get a
set of representative outlier traces to investigate further.

    SELECT crdb_internal.request_statement_bundle(
      'INSERT INTO new_order(no_o_id, ...)', -- stmt fingerprint
      0.05,                                  -- 5% sampling probability
      '30ms'::INTERVAL,                      -- 30ms target (p99.9)
      '10h'::INTERVAL                        -- capture window
    );

    WITH histogram AS
         (SELECT extract('minute', collected_at) AS minute,
                 count(*) FROM system.statement_diagnostics
          GROUP BY minute)
    SELECT minute, repeat('*', (30 * count/(max(count) OVER ()))::INT8) AS freq
    FROM histogram
    ORDER BY count DESC
    LIMIT 10;

      minute |              freq
    ---------+---------------------------------
          36 | ******************************
          38 | *********************
          35 | *********************
          00 | *********************
          37 | ********************
          30 | ********************
          40 | *****************
          20 | **************
          10 | *************
          50 | ***********
    (10 rows)

We see that we captured just the set of bundles/traces we were
interested in. Longer term we'd want:
- Controls over the maximum number of captures we'd want stored over
  some period of time;
- Eviction of older bundles, assuming they're less relevant, making room
  for newer captures.
To safeguard against misuse (in this current form we should only use it
for experiments or escalations under controlled environments), we only
act on this setting provided the diagnostics request has an expiration
timestamp and a specified probability, crude measures to prevent
unbounded growth.

Release note: None
irfansharif added a commit to irfansharif/cockroach that referenced this pull request Oct 11, 2022
This commit introduces a backportable alternative to cockroachdb#82750 and cockroachdb#83020,
to help solve for cockroachdb#82896. It gives a palatable alternative to
sql.trace.stmt.enable_threshold. See those PRs/issues for verbose
commentary, but roughly we can do the following:

  Pick a stmt fingerprint, declare a sampling probability which controls
  when verbose tracing is enabled for it, and a latency threshold for
  which a trace is logged. With a given stmt rate (say 1000/s) and a
  given percentile we're trying to capture (say p99.9), we have N stmt/s
  in the 99.9th percentile (1 in our example). We should be able to set
  a sampling probability P such that with high likelihood (>95%) we
  capture at least one trace captured over the next S seconds. The
  sampling rate lets you control how the overhead you’re introducing for
  those statements in aggregate, which if dialed up higher lets you
  lower S. You might want to do such a thing for infrequently executed
  statements.

This commit does the simplest thing: ask for all input through shoddy
cluster settings and just log traces to our usual files. Below we
outline an example of how to use these settings to catch outlier
executions for writes to the history table in TPC-C. When using it, we
did not see an overall increase in p99s as a result of the sampling. It
also took only about 10s to capture this data, showing clearly that it
was due to latch waits.

  > SELECT
      encode(fingerprint_id, 'hex'),
      (statistics -> 'statistics' ->> 'cnt')::INT AS count,
      metadata ->> 'query' AS query
    FROM system.statement_statistics
    ORDER BY COUNT DESC limit 10;

           encode    | count |                                             query
    -----------------+-------+--------------------------------------------------------------------------------------------------------------------
    ...
    4e4214880f87d799 |  2680 | INSERT INTO history(h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_amount, h_date, h_data) VALUES ($1, $2, __more6__)

    > SET CLUSTER SETTING trace.fingerprint = '4e4214880f87d799'; -- fingerprint
    > SET CLUSTER SETTING trace.fingerprint.probability = 0.01;   -- 1% sampling probability
    > SET CLUSTER SETTING trace.fingerprint.threshold = '90ms';   -- latency threshold

     0.521ms      0.005ms    event:kv/kvserver/concurrency/concurrency_manager.go:260 [n1,s1,r105/1:/Table/109/1/{90/7/2…-105/10…}] acquiring latches
     0.532ms      0.011ms    event:kv/kvserver/spanlatch/manager.go:532 [n1,s1,r105/1:/Table/109/1/{90/7/2…-105/10…}] waiting to acquire read latch /Table/109/1/105/3/519/0@0,0, held by write latch /Table/109/1/105/3/519/0@1654849354.483984000,0
    99.838ms     99.306ms    event:kv/kvserver/concurrency/concurrency_manager.go:301 [n1,s1,r105/1:/Table/109/1/{90/7/2…-105/10…}] scanning lock table for conflicting locks
    99.851ms      0.013ms    event:kv/kvserver/replica_read.go:251 [n1,s1,r105/1:/Table/109/1/{90/7/2…-105/10…}] executing read-only batch

Compare this to sql.trace.stmt.enable_threshold which enables verbose
tracing for all statements with 100% probability. It introduces far too
much overhead for it to be used in reasonable production clusters. The
overhead also often masks the problems we're looking to capture.

Release note (general change): We introduced three cluster settings to
collect trace data for outlier executions with low overhead.
This is only going to be available in 22.1; in 22.2 and beyond we have
other mechanisms to collect outlier traces. Traces come in handy when
looking to investigate latency spikes, and these three settings are
intended to supplant most uses of sql.trace.stmt.enable_threshold. That
setting enables verbose tracing for all statements with 100% probability
which can cause a lot of overhead in production clusters, and also a lot
of logging pressure. Instead we introduce the following:
- trace.fingerprint
- trace.fingerprint.probability
- trace.fingerprint.threshold
Put together (all three have to be set), tracing is enabled only for the
statement with the given hex-encoded fingerprint, only probabilistically
(with whatever probability trace.fingerprint.probability is set to), and
the trace is logged only if the latency threshold is exceeded (configured
using trace.fingerprint.threshold). To obtain a hex-encoded fingerprint,
consider looking at the contents of system.statement_statistics. For
example:

  > SELECT
      encode(fingerprint_id, 'hex'),
      (statistics -> 'statistics' ->> 'cnt')::INT AS count,
      metadata ->> 'query' AS query
    FROM system.statement_statistics
    ORDER BY COUNT DESC limit 10;

           encode    | count |                                             query
    -----------------+-------+--------------------------------------------------------------------------------------------------------------------
    ...
    4e4214880f87d799 |  2680 | INSERT INTO history(h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_amount, h_date, h_data) VALUES ($1, $2, __more6__)