
sql: measure CPU time spent during SQL execution #93952

Merged: 1 commit merged into cockroachdb:master on Jan 19, 2023

Conversation

DrewKimball
Collaborator

This commit adds tracking for CPU time spent during SQL execution. The CPU time is tracked at the operator granularity when statistics collection is enabled, similar to execution time.

For now, the CPU time is only surfaced in the output of EXPLAIN ANALYZE variants. A future PR will add support for logging this value in the statement statistics.

Informs: #87213

Release note (sql change): CPU time spent during SQL execution is now visible in the output of queries run with EXPLAIN ANALYZE. This measure does not include CPU time spent while serving KV requests. This can be useful for diagnosing performance issues and optimizing SQL queries.

@DrewKimball DrewKimball requested review from yuzefovich and a team December 20, 2022 00:53
@DrewKimball DrewKimball requested a review from a team as a code owner December 20, 2022 00:53
@cockroach-teamcity
Member

This change is Reviewable

Member

@yuzefovich yuzefovich left a comment


Nice! Can you share some testing results (manual, or maybe that single test that is disabled for tenants)?

Reviewed 12 of 12 files at r1, all commit messages.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @DrewKimball)


-- commits line 6 at r1:
nit: it'd be good to explicitly mention the goroutines that SQL execution uses but which are currently not tracked by this work (since those should be relatively cheap). You mentioned on slack the async streamer goroutines, but I think the main goroutine of the hash router (colflow.HashRouter.Run) as well as the outbox (colrpc.Outbox.Run) are not tracked.


pkg/sql/colflow/stats.go line 111 at r1 (raw file):

	bic.ctx = ctx
	bic.stopwatch.Start()
	bic.cpuStopwatch.Start()

nit: do you think it'd be worth extracting an abstraction in this package that combines the two stop watches?


pkg/sql/execinfrapb/component_stats.proto line 156 at r1 (raw file):

  optional util.optional.Uint consumed_r_u = 4 [(gogoproto.nullable) = false];
  // CPU time spent executing the component.
  optional util.optional.Duration c_p_u_time = 5 [(gogoproto.nullable) = false];

nit: you could use something like optional util.optional.Duration cpu_time = 5 [(gogoproto.nullable) = false, (gogoproto.customname) = "CPUTime"] to get the same name. This seems a bit cleaner, but up to you (ditto for consumed_ru).


pkg/util/timeutil/cpustopwatch.go line 21 at r1 (raw file):

// CPUStopWatch is a utility stop watch for measuring CPU time spent by a
// component. It can be safely started and stopped multiple times, but is
// not safe to use concurrently. If CPUStopWatch is nil, all operations are

Hm, I think we needed the concurrency safety at some point, and I can't recall a PR that changed that. Do you think this is no longer needed?

@DrewKimball
Collaborator Author

Here's explain analyze output for a simple join query:

  planning time: 182µs
  execution time: 1.7s
  distribution: full
  vectorized: true
  rows read from KV: 2,000,000 (170 MiB, 22 gRPC calls)
  cumulative time spent in KV: 1.6s
  maximum memory usage: 60 MiB
  network usage: 159 MiB (5,603 messages)
  sql cpu time: 1.9s
  regions: us-east1

  • hash join
  │ nodes: n1, n2, n3
  │ regions: us-east1
  │ actual row count: 1,000,000
  │ estimated max memory allocated: 135 MiB
  │ estimated max sql temp disk usage: 0 B
  │ sql cpu time: 519ms
  │ estimated row count: 43,722
  │ equality: (x, y) = (x, y)
  │
  ├── • scan
  │     nodes: n1, n2, n3
  │     regions: us-east1
  │     actual row count: 1,000,000
  │     KV time: 821ms
  │     KV contention time: 0µs
  │     KV rows read: 1,000,000
  │     KV bytes read: 85 MiB
  │     KV gRPC calls: 11
  │     estimated max memory allocated: 10 MiB
  │     sql cpu time: 699ms
  │     estimated row count: 1,000,000 (100% of the table; stats collected 4 minutes ago)
  │     table: xy@xy_pkey
  │     spans: FULL SCAN
  │
  └── • scan
        nodes: n1, n2, n3
        regions: us-east1
        actual row count: 1,000,000
        KV time: 752ms
        KV contention time: 0µs
        KV rows read: 1,000,000
        KV bytes read: 85 MiB
        KV gRPC calls: 11
        estimated max memory allocated: 10 MiB
        sql cpu time: 685ms
        estimated row count: 1,000,000 (100% of the table; stats collected 4 minutes ago)
        table: xy@xy_pkey
        spans: FULL SCAN

  Diagram: https://cockroachdb.github.io/distsqlplan/decode.html#eJzkWMtu4zYU3fcrCK6Sgon50NPAAG6DFM1M4wySwWwKo1AkNhYiSx6RbqQG-az-QL-skCXBtixxTNvxONOFFxRFmfece8_h5TMUXyLYh3eXv11efAI_gl9ub65BloOr4fDyFry_uRoWo5_uQJZTcDMEJ1l-niGQ5ef5KXhXDGk5puf5KUQwTgI-9CZcwP7vkEAEKUSQwRGC0zTxuRBJWkw9z1-8CjLYxwiG8XQmi8cjBP0k5bD_DGUoIw778JN3H_Fb7gU87WGIYMClF0bzz2f5IMv_mD7yHCJ4kUSzSSz6IEMgR2ACEbybesWDsx7BGGPgxQEgIJFjnkIEP3wGMpzwPrBNYyLKB34SSx7LMImrOfzvP9VUmjwJkHIv6AOCMMbFr5y5zyWvpxwTXIc_l88fbj9eAN-LItEHLkTw-vPFBRCST4GfzGIJTngme2EsT5e-2Fv-drmA88eWBaRHCnAnXgYmfJKkOfCiKPE9yYsN4moX4ksE_OmsCsZynXmg9570x1yAZCanM9kHrm1DBOcB1o8W2xi9IFg-LWmrabnPwdgT41VCBgQNKBy9jBAU0nvgsE9e0HZUs9eg2nL_L1Q7xsGppp1ULz41i5M04CkPVj42KlZ-7ZWWfPnVE-P3SRjztGet7i7if8qT-RZP36Xhw7geQARviqDnIzRgaGCggYkGxXKecX-2lBDEZXMM24E3aAV8MV2AL_lkCoJQPIKZ8B6KhALrvBDbbeXFca0mL4wxZFlmg5UF3GwHuFuwHCZnybTnrsLYhITON78aEm0NiCLbMTdNtUVQxkpQZHO5IFvJRakWlWjgZaUw7YUebCYVLRJRZkBDH4haH3Dva3rQ61YDG4MPLWpgEKvccZMl3GRoz3qvQaCxbwIx_X4IZA47FIG0k8ADqLj9CiruYIWKm2Q7FTfb3dVoVXG2sYrTzeuF7lAv-KxnrhWMZekdjY64XEzX_DZ6p8GfuW_-TOe74c-26beRO3pYuXNeQe4sZ_-HVqujmTDYutxR5Nrd57vuQ2sLUrdcTJNY8CbsrV_GBdY8eOAldyKZpT7_mCb-_G_K4c183dzVAi5kOUvLwVVcTwnpydbTr-WaKis5Z9uAy7boB3TCszrDi7l8StJHEHmSx35emGElm_XMkxfKOnajbFwCLngaelH4t7fcF5RNQL2uUhmfh38VyNQeuHihFp36DVp3sfULEy4KyJbfcQ1H23J1gHI3B8pwXAVQbAegyurZFShTu1hXgSIaBUMVBWMrO2iy14KxNy8YdXhWZ3haBWM7RJUH5AAFg2lbHuA1E-0sGDVQ7uZAqQrGru7ktgNqHwXjuG6rBK9dlXUWDG0ChZeBYqvK0lzMlIsJXlmN354_GVva73q5Od3lpio2Y3VVI4dU2VMCokidtgprP6TqYNLt2WvHvSq2dsrL7WvRrdU46cSkYa-m3a0W1HDKuXaqCTs-rs0t3VQr_1WnM_OtYdJtwS3tzp4LQOuiTScoDbtUFQCr7zU7CoAcH9mWjjtubQDntsIB8G7HLeMg_Ym9L6O0sAInF6vy5wi9Ug2LTn_LFGVFDRUs1luDpdtumxLKjkdAnX05paoAqnvSDvk8wvxXo6LTriryX43KEZ6q1Kh0m20z_U1b0TMd9gDt7ssnlfqv8knm7tZtb-aT2tdTo5cf_gsAAP__f1Si0A==
(52 rows)


Time: 1.704s total (execution 1.704s / network 0.001s)

Collaborator Author

@DrewKimball DrewKimball left a comment


Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @yuzefovich)


-- commits line 6 at r1:

Previously, yuzefovich (Yahor Yuzefovich) wrote…

nit: it'd be good to explicitly mention the goroutines that SQL execution uses but which are currently not tracked by this work (since those should be relatively cheap). You mentioned on slack the async streamer goroutines, but I think the main goroutine of the hash router (colflow.HashRouter.Run) as well as the outbox (colrpc.Outbox.Run) are not tracked.

Done.


pkg/sql/colflow/stats.go line 111 at r1 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

nit: do you think it'd be worth extracting an abstraction in this package that combines the two stop watches?

I embedded cpuStopWatch in StopWatch, let me know what you think.


pkg/sql/execinfrapb/component_stats.proto line 156 at r1 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

nit: you could use something like optional util.optional.Duration cpu_time = 5 [(gogoproto.nullable) = false, (gogoproto.customname) = "CPUTime"] to get the same name. This seems a bit cleaner, but up to you (ditto for consumed_ru).

Agreed, done.


pkg/util/timeutil/cpustopwatch.go line 21 at r1 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

Hm, I think we needed the concurrency safety at some point, and I can't recall a PR that changed that. Do you think this is no longer needed?

I didn't think it would be necessary for the vectorized stats collector, but didn't think it through very deeply. Either way, it's behind a mutex now.

Member

@yuzefovich yuzefovich left a comment


It's nice to see that the execution time (the wall time) is slightly above the CPU time. :lgtm:

Reviewed 9 of 9 files at r2, all commit messages.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @DrewKimball)


pkg/sql/colflow/stats.go line 111 at r1 (raw file):

Previously, DrewKimball (Drew Kimball) wrote…

I embedded cpuStopWatch in StopWatch, let me know what you think.

👍

Contributor

@j82w j82w left a comment


Reviewed all commit messages.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @DrewKimball)


pkg/sql/opt/exec/explain/output.go line 366 at r2 (raw file):

// AddCPUTime adds a top-level field for the cumulative cpu time spent by SQL
// execution.
func (ob *OutputBuilder) AddCPUTime(cpuTime time.Duration) {

Please add at least an e2e test verifying the output contains a valid CPU time.

@DrewKimball DrewKimball force-pushed the cpu branch 3 times, most recently from 2880abb to faa97cc Compare January 10, 2023 22:37
Member

@yuzefovich yuzefovich left a comment


Reviewed 9 of 21 files at r3, 14 of 14 files at r4, all commit messages.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @DrewKimball and @j82w)


-- commits line 20 at r4:
I'm confused about the mention of kvBatchFetcherHelper here - there are no modifications around that struct in this PR, and the struct is not concerned with time measurement on master either.


pkg/sql/instrumentation.go line 567 at r4 (raw file):

		ob.AddNetworkStats(queryStats.NetworkMessages, queryStats.NetworkBytesSent)
		ob.AddMaxDiskUsage(queryStats.MaxDiskUsage)
		if !ih.containsMutation && ih.vectorized && grunning.Supported() {

Remind me please why we're only doing this for vectorized plans? Due to not measuring the total execution time of a row-by-row processor?


pkg/sql/colflow/stats.go line 276 at r4 (raw file):

		// In order to account for SQL CPU time, we have to subtract the CPU time
		// spent while serving KV requests on a SQL goroutine.
		cpuTime -= vsc.kvReader.GetKVCPUTime()

I wonder whether we should also expose this new KVCPUTime measurement.

Currently, on master we track a single total "execution" time per operator, and for the TableReader processor we show it as "KV time" whereas for all other readers it appears as "execution time". Now that we also precisely track the actual "KV time", we could be more precise with the terminology. As is, with the current change, "KV time" for ColBatchScans/TableReaders includes the "sql cpu time". I think it'd be better to hide the total execution time and instead show the KV time and the SQL CPU time (which, in theory, should add up to the total "execution time"). I'm under the impression that we can assume the grunning library is supported, right? WDYT? (There is some context in a comment in the code above.)


pkg/sql/colflow/stats.go line 280 at r4 (raw file):

		s.Exec.ExecTime.Set(time)
	}
	if cpuTime > 0 && grunning.Supported() {

nit: should we add an assertion that cpuTime is non-negative?


pkg/sql/execstats/traceanalyzer.go line 382 at r4 (raw file):

	}

	for _, cpuTime := range a.nodeLevelStats.CPUTimeGroupedByNode {

nit: it might be good to modify TestQueryLevelStatsAccumulate to also test CPUTime.


pkg/sql/opt/exec/explain/output.go line 366 at r2 (raw file):

Previously, j82w (Jake) wrote…

Please add at least an e2e test verifying the output contains a valid CPU time.

+1 we should have at least some sanity check that a positive CPU time value is measured.

Also, can the test introduced as skipped in #89256 now be unskipped? My understanding is that the RU estimate should now be more precise; or perhaps we could incorporate an adjustment to the error margin depending on the measured CPU time?


pkg/util/timeutil/cpustopwatch.go line 21 at r4 (raw file):

// CPUStopWatch is a wrapper around cpuStopWatch that is safe to use
// concurrently. If CpuStopWatch is nil, all operations are no-ops and no

nit: s/CpuStopWatch/CPUStopWatch/.

Collaborator Author

@DrewKimball DrewKimball left a comment


Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @j82w and @yuzefovich)


-- commits line 20 at r4:

Previously, yuzefovich (Yahor Yuzefovich) wrote…

I'm confused about the mention of kvBatchFetcherHelper here - there are no modifications around that struct in this PR, and the struct is not concerned with time measurement on master either.

Sorry, that's a remnant from an earlier iteration. Done.


pkg/sql/instrumentation.go line 567 at r4 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

Remind me please why we're only doing this for vectorized plans? Due to not measuring the total execution time of a row-by-row processor?

It's because we measure the CPU time using vectorizedStatsCollectors. We can still measure it for row-wise processors; they just need to be wrapped in a columnarizer + statsCollector. I've expanded the comment.


pkg/sql/colflow/stats.go line 276 at r4 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

I wonder whether we should also expose this new KVCPUTime measurement.

Currently, on master we track a single total "execution" time per operator, and for the TableReader processor we show it as "KV time" whereas for all other readers it appears as "execution time". Now that we also precisely track the actual "KV time", we could be more precise with the terminology. As is, with the current change, "KV time" for ColBatchScans/TableReaders includes the "sql cpu time". I think it'd be better to hide the total execution time and instead show the KV time and the SQL CPU time (which, in theory, should add up to the total "execution time"). I'm under the impression that we can assume the grunning library is supported, right? WDYT? (There is some context in a comment in the code above.)

KVCPUTime only tracks the local KV CPU time; any CPU time spent on a remote node or a separate goroutine is omitted. It seems like it would be confusing to display that as-is; what do you think? Do you think I should change the name of the field to make that clearer?


pkg/sql/colflow/stats.go line 280 at r4 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

nit: should we add an assertion that cpuTime is non-negative?

Done.


pkg/sql/execstats/traceanalyzer.go line 382 at r4 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

nit: it might be good to modify TestQueryLevelStatsAccumulate to also test CPUTime.

Good catch, done.


pkg/sql/opt/exec/explain/output.go line 366 at r2 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

+1 we should have at least some sanity check that a positive CPU time value is measured.

Also, can the test introduced as skipped in #89256 now be unskipped? My understanding is that the RU estimate should now be more precise; or perhaps we could incorporate an adjustment to the error margin depending on the measured CPU time?

Added an end-to-end test that verifies positive values for queries that output CPU time, and no output for queries with mutations.

This PR doesn't refactor the RU estimation to use grunning yet, and I think that should wait until I get it working with mutations.


pkg/util/timeutil/cpustopwatch.go line 21 at r4 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

nit: s/CpuStopWatch/CPUStopWatch/.

Done.

Member

@yuzefovich yuzefovich left a comment


Reviewed 9 of 9 files at r5, all commit messages.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @DrewKimball and @j82w)


pkg/sql/instrumentation.go line 567 at r4 (raw file):

Previously, DrewKimball (Drew Kimball) wrote…

It's because we measure the CPU time using vectorizedStatsCollectors. We can still measure it for row-wise processors; they just need to be wrapped in a columnarizer + statsCollector. I've expanded the comment.

Ok, so effectively we haven't added the necessary execution statistics collection for purely row-by-row plans since it's not important enough. Consider leaving a TODO still even if we don't plan to add it immediately.


pkg/sql/colflow/stats.go line 276 at r4 (raw file):

It seems like it would be confusing to display that as-is; what do you think?

Yeah, fair point.

IIUC, if we were to extend the CPU stop watches (which we are introducing in this PR to measure the on-CPU KV time on the current goroutine) to also track the wall time, that wall time would be a proper KV time of an operator, right?

What I'm driving at is that KV time of TableReader on the DistSQL diagram now includes sql cpu time which can be confusing. It'd be nice to clarify this. Before this change, we only measured the wall "execution" time of the operator (which would be shown as execution time for non-readers and KV time for readers). This change introduces sql cpu time which measures on-core CPU time at the SQL level of an operator, excluding on-core KV time if any. This sql cpu time is included into the "execution time".

I think as the next step it'd be nice to explicitly measure the KV wall time a reader spent waiting for a KV request to evaluate (regardless of whether it was done remotely or locally, in the same goroutine or not). We would then show this new "KV time" statistic as KV time on the diagram while showing "execution time" as execution time for all operators (i.e., we'd change it for KV readers), and it'll be such that KV (wall) time + sql cpu time <= execution (wall) time. Does this make sense?

It doesn't have to be in this PR but seems like a nice improvement with very little work after this change is done.


pkg/sql/opt/exec/explain/output_test.go line 179 at r5 (raw file):

	serverutils.InitTestServerFactory(server.TestServerFactory)
	tc := testcluster.StartTestCluster(t, numNodes, testClusterArgs)

nit: I think whenever we start a test cluster, we add

	defer leaktest.AfterTest(t)()
	defer log.Scope(t).Close(t)

to the top of the test.

Collaborator Author

@DrewKimball DrewKimball left a comment


Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @j82w and @yuzefovich)


pkg/sql/instrumentation.go line 567 at r4 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

Ok, so effectively we haven't added the necessary execution statistics collection for purely row-by-row plans since it's not important enough. Consider leaving a TODO still even if we don't plan to add it immediately.

Added a TODO to lift the restrictions on pure row-by-row plans and plans with mutations.


pkg/sql/colflow/stats.go line 276 at r4 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

It seems like it would be confusing to display that as-is, what do you think?

Yeah, fair point.

IIUC, if we were to extend the CPU stop watches (which we are introducing in this PR to measure the on-CPU KV time on the current goroutine) to also track the wall time, that wall time would be a proper KV time of an operator, right?

What I'm driving at is that KV time of TableReader on the DistSQL diagram now includes sql cpu time which can be confusing. It'd be nice to clarify this. Before this change, we only measured the wall "execution" time of the operator (which would be shown as execution time for non-readers and KV time for readers). This change introduces sql cpu time which measures on-core CPU time at the SQL level of an operator, excluding on-core KV time if any. This sql cpu time is included into the "execution time".

I think as the next step it'd be nice to explicitly measure the KV wall time a reader spent waiting for a KV request to evaluate (regardless of whether it was done remotely or locally, in the same goroutine or not). We would then show this new "KV time" statistic as KV time on the diagram while showing "execution time" as execution time for all operators (i.e., we'd change it for KV readers), and it'll be such that KV (wall) time + sql cpu time <= execution (wall) time. Does this make sense?

It doesn't have to be in this PR but seems like a nice improvement with very little work after this change is done.

That sounds like a good idea. I'll add a TODO.


pkg/sql/opt/exec/explain/output_test.go line 179 at r5 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

nit: I think whenever we start a test cluster, we add

	defer leaktest.AfterTest(t)()
	defer log.Scope(t).Close(t)

to the top of the test.

Done.

@DrewKimball DrewKimball force-pushed the cpu branch 2 times, most recently from 4508294 to 2373476 Compare January 19, 2023 08:11
This commit adds tracking for CPU time spent during SQL execution. The CPU
time is tracked at the operator granularity when statistics collection is
enabled, similar to execution time.

For now, the CPU time is only surfaced in the output of `EXPLAIN ANALYZE`
variants. A future PR will add support for logging this value in the
statement statistics.

Note that the main goroutines of the streamer, hash router, and outbox are
not currently tracked by this work. However, these are expected to be
relatively cheap and shouldn't significantly impact the measurement.

Additionally, KV work is performed on a SQL goroutine in some cases
(e.g. when there is a single-range request for a local range). This makes
it necessary to track CPU time spent fulfilling KV requests on a SQL
goroutine so it can be subtracted from the total measured CPU time.
This logic is handled by the `cFetcher` and `rowFetcherStatCollector`
for the operators that only perform reads.

Finally, because mutations do not record stats, they currently have no way
to differentiate KV CPU time from SQL CPU time. For this reason, a plan that
contains mutations will not output CPU time.

Informs: cockroachdb#87213

Release note (sql change): CPU time spent during SQL execution is now visible
in the output of queries run with `EXPLAIN ANALYZE`. This measure does not
include CPU time spent while serving KV requests, and CPU time is not shown for
queries that perform mutations or for plans that aren't vectorized. This can be
useful for diagnosing performance issues and optimizing SQL queries.
Member

@yuzefovich yuzefovich left a comment


:lgtm:

Reviewed 6 of 6 files at r6, all commit messages.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @j82w)

@DrewKimball
Collaborator Author

TFTR!

bors r+

@craig
Contributor

craig bot commented Jan 19, 2023

Build failed (retrying...):

@craig
Contributor

craig bot commented Jan 19, 2023

Build succeeded:

@craig craig bot merged commit c8c4241 into cockroachdb:master Jan 19, 2023
@DrewKimball DrewKimball deleted the cpu branch January 19, 2023 22:28