71543: roachtest: disable prometheus for scrub r=otan a=otan

I'm unable to reproduce the error running roachtest myself, and it's the
only set of tests affected.

Resolves #71979

Release note: None

71927: sql/catalog/nstree: optimize with lazy initialization, pooling btrees r=ajwerner a=ajwerner

This change comes in response to
#66112 (comment).

The thrust of the change is to lazily initialize the data structures so that
when they are not used, they incur no cost. Additionally, the underlying
trees are pooled so that when they are used, the allocations are effectively
free. This was originally deemed unimportant because the connExecutor's
descs.Collection is long-lived. Unfortunately, others, such as the one used
for type resolution in distsql, are not.
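
A minimal, self-contained sketch of the lazy-initialization-plus-pooling pattern described above (illustrative names only; the real implementation lives in `pkg/sql/catalog/nstree` and pools `btree.BTree` instances, not a Go map):

```go
package main

import (
	"fmt"
	"sync"
)

// treePool stands in for the pooled btrees; the identifiers here are
// illustrative, not the real cockroach ones.
var treePool = sync.Pool{
	New: func() interface{} { return make(map[int]string) },
}

// Map is safe for use without initialization: the zero value
// allocates nothing until the first write.
type Map struct {
	byID map[int]string
}

func (m *Map) initialized() bool { return m.byID != nil }

func (m *Map) maybeInitialize() {
	if !m.initialized() {
		m.byID = treePool.Get().(map[int]string)
	}
}

// Upsert lazily grabs pooled storage on the first write.
func (m *Map) Upsert(id int, name string) {
	m.maybeInitialize()
	m.byID[id] = name
}

// GetByID on the zero value costs nothing and finds nothing.
func (m *Map) GetByID(id int) (string, bool) {
	if !m.initialized() {
		return "", false
	}
	v, ok := m.byID[id]
	return v, ok
}

// Clear empties the storage, returns it to the pool, and resets
// the Map to its zero value.
func (m *Map) Clear() {
	if !m.initialized() {
		return
	}
	for k := range m.byID {
		delete(m.byID, k)
	}
	treePool.Put(m.byID)
	*m = Map{}
}

func main() {
	var m Map // zero value, nothing allocated yet
	m.Upsert(1, "descriptor")
	v, _ := m.GetByID(1)
	fmt.Println(v) // descriptor
	m.Clear()      // storage goes back to the pool
	_, ok := m.GetByID(1)
	fmt.Println(ok) // false
}
```

Short-lived collections (like the ones built for distsql type resolution) thus pay for a tree only if they actually touch it, and reuse pooled nodes when they do.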

```
name                                           old time/op    new time/op    delta
FlowSetup/vectorize=true/distribute=true-16       145µs ± 2%     139µs ± 3%   -3.94%  (p=0.000 n=19+20)
FlowSetup/vectorize=true/distribute=false-16      141µs ± 3%     134µs ± 2%   -4.66%  (p=0.000 n=18+19)
FlowSetup/vectorize=false/distribute=true-16      138µs ± 4%     132µs ± 4%   -4.23%  (p=0.000 n=20+20)
FlowSetup/vectorize=false/distribute=false-16     133µs ± 3%     127µs ± 3%   -4.41%  (p=0.000 n=20+19)

name                                           old alloc/op   new alloc/op   delta
FlowSetup/vectorize=true/distribute=true-16      38.1kB ± 3%    36.7kB ± 2%   -3.58%  (p=0.000 n=18+18)
FlowSetup/vectorize=true/distribute=false-16     36.2kB ± 0%    34.9kB ± 0%   -3.66%  (p=0.000 n=18+16)
FlowSetup/vectorize=false/distribute=true-16     42.6kB ± 0%    41.3kB ± 0%   -3.11%  (p=0.000 n=17+17)
FlowSetup/vectorize=false/distribute=false-16    41.0kB ± 0%    39.7kB ± 0%   -3.22%  (p=0.000 n=17+17)

name                                           old allocs/op  new allocs/op  delta
FlowSetup/vectorize=true/distribute=true-16         368 ± 0%       314 ± 0%  -14.67%  (p=0.000 n=16+17)
FlowSetup/vectorize=true/distribute=false-16        354 ± 0%       300 ± 0%  -15.25%  (p=0.000 n=17+17)
FlowSetup/vectorize=false/distribute=true-16        338 ± 1%       283 ± 1%  -16.13%  (p=0.000 n=19+19)
FlowSetup/vectorize=false/distribute=false-16       325 ± 0%       271 ± 0%  -16.62%  (p=0.000 n=18+18)
```

Omitting a release note because I'm doubtful this meaningfully shows up on its
own.

Release note: None

71960: ui: Added informative warning for insufficient privileges r=nathanstilwell a=nathanstilwell

<img width="972" alt="Screen Shot 2021-10-25 at 17 17 49" src="https://user-images.githubusercontent.com/397448/138772352-117456e1-10a8-475b-9b1c-79b0e2327558.png">

Previously, `/#/reports/nodes` would seem to hang in a loading state
indefinitely when a DB Console user did not have admin privileges for
that endpoint. This was due to the nodes data being empty from a 403
response.

An error is captured in the application state for this response, so by
mapping this error as a prop to the component, the UI can distinguish
between the nodes data simply being empty and it being empty due to a
restriction error.

<img width="827" alt="Screen Shot 2021-10-25 at 17 16 07" src="https://user-images.githubusercontent.com/397448/138772135-33321d34-9728-489b-8549-1c8bd55cf650.png">

When this error is detected, we render an `<InlineAlert />` to notify the
user of their lack of privileges.

Release note (ui change): The All Nodes report now notifies the user if
they need more privileges to view information.

71966: ui: default filter on Transaction and Statement pages now exclude internals r=maryliag a=maryliag

Previously, when no App filter was selected on the Transactions and
Statements pages, all data was shown; now internal transactions and
statements are excluded by default.

Fixes #70544

Release note (ui change): When no App is selected in the Transactions and
Statements page filters, internal transactions and statements are now
excluded by default.

71967: server,ui: transactionDetail page now shows statement stats scoped by transaction fingerprint ID r=matthewtodd a=Azhng

Previously, the transactionDetail page showed statement statistics
aggregated across **all** executions of that statement fingerprint. This
led to confusing UX, and the statement statistics were subsequently removed
from the transactionDetail page.
This commit reintroduces statement statistics on the transactionDetail
page. The statistics are now aggregated only across the statement
fingerprints that are part of the selected transaction fingerprint.

Resolves #59205 #65106

Release note (ui change): transactionDetail page now shows statement
statistics scoped by the transaction fingerprint.



https://user-images.githubusercontent.com/9267198/138916095-c91895d5-6f34-49b7-87dd-81f375b3f582.mov



71987: sql: skip TestSavepoints r=nkodali a=jbowens

Refs: #70220

Reason: flaky test

Generated by bin/skip-test.

Release justification: non-production code changes

Release note: None

Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
Co-authored-by: Andrew Werner <awerner32@gmail.com>
Co-authored-by: Nathan Stilwell <nathanstilwell@cockroachlabs.com>
Co-authored-by: Marylia Gutierrez <marylia@cockroachlabs.com>
Co-authored-by: Azhng <archer.xn@gmail.com>
Co-authored-by: Jackson Owens <jackson@cockroachlabs.com>
7 people committed Oct 26, 2021
7 parents 5f43585 + d23ccaa + 88df2b9 + 3b4274d + 8f804f8 + d798002 + 9c4a38c commit ff2877c
Showing 24 changed files with 219 additions and 103 deletions.
5 changes: 3 additions & 2 deletions pkg/cmd/roachtest/tests/scrub.go
@@ -88,8 +88,9 @@ func makeScrubTPCCTest(
}
return nil
},
Duration: length,
SetupType: usingImport,
DisablePrometheus: true,
Duration: length,
SetupType: usingImport,
})
},
}
23 changes: 16 additions & 7 deletions pkg/cmd/roachtest/tests/tpcc.go
@@ -50,14 +50,17 @@ const (
)

type tpccOptions struct {
Warehouses int
ExtraRunArgs string
ExtraSetupArgs string
Chaos func() Chaos // for late binding of stopper
During func(context.Context) error // for running a function during the test
Duration time.Duration // if zero, TPCC is not invoked
SetupType tpccSetupType
Warehouses int
ExtraRunArgs string
ExtraSetupArgs string
Chaos func() Chaos // for late binding of stopper
During func(context.Context) error // for running a function during the test
Duration time.Duration // if zero, TPCC is not invoked
SetupType tpccSetupType
// PrometheusConfig, if set, overwrites the default prometheus config settings.
PrometheusConfig *prometheus.Config
// DisablePrometheus will force prometheus to not start up.
DisablePrometheus bool
// WorkloadInstances contains a list of instances for
// workloads to run against.
// If unset, it will run one workload which talks to
@@ -1435,6 +1438,9 @@ func setupPrometheus(
if c.IsLocal() {
return nil, func() {}
}
if opts.DisablePrometheus {
return nil, func() {}
}
workloadNode := c.Node(c.Spec().NodeCount)
cfg = &prometheus.Config{
PrometheusNode: workloadNode,
@@ -1448,6 +1454,9 @@
},
}
}
if opts.DisablePrometheus {
t.Fatal("test has PrometheusConfig but DisablePrometheus was on")
}
if c.IsLocal() {
t.Skip("skipping test as prometheus is needed, but prometheus does not yet work locally")
return nil, func() {}
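
The gating added above can be summarized in a small sketch (types simplified and names hypothetical except for `DisablePrometheus` and `PrometheusConfig`): a test either gets the default prometheus config, opts out entirely, or supplies its own — and supplying a config while also disabling prometheus is treated as a programming error:

```go
package main

import (
	"errors"
	"fmt"
)

// prometheusConfig stands in for *prometheus.Config.
type prometheusConfig struct{}

type tpccOptions struct {
	PrometheusConfig  *prometheusConfig
	DisablePrometheus bool
}

// setupPrometheus mirrors the decision structure of the diff above:
// local runs and explicitly disabled tests skip prometheus; an explicit
// config combined with DisablePrometheus is rejected as contradictory.
func setupPrometheus(opts tpccOptions, local bool) (*prometheusConfig, error) {
	if opts.PrometheusConfig == nil {
		if local || opts.DisablePrometheus {
			return nil, nil // no prometheus for this test
		}
		return &prometheusConfig{}, nil // default config
	}
	if opts.DisablePrometheus {
		return nil, errors.New("test has PrometheusConfig but DisablePrometheus was on")
	}
	return opts.PrometheusConfig, nil
}

func main() {
	_, err := setupPrometheus(tpccOptions{DisablePrometheus: true}, false)
	fmt.Println(err) // <nil>
	_, err = setupPrometheus(tpccOptions{
		PrometheusConfig:  &prometheusConfig{},
		DisablePrometheus: true,
	}, false)
	fmt.Println(err != nil) // true
}
```

(In the real roachtest code the error path calls `t.Fatal` rather than returning an error; an error return keeps this sketch self-contained.)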
22 changes: 15 additions & 7 deletions pkg/server/combined_statement_stats.go
@@ -113,6 +113,7 @@ func collectCombinedStatements(
query := fmt.Sprintf(
`SELECT
fingerprint_id,
transaction_fingerprint_id,
app_name,
aggregated_ts,
metadata,
@@ -122,7 +123,7 @@ func collectCombinedStatements(
FROM crdb_internal.statement_statistics
%s`, whereClause)

const expectedNumDatums = 7
const expectedNumDatums = 8

it, err := ie.QueryIteratorEx(ctx, "combined-stmts-by-interval", nil,
sessiondata.InternalExecutorOverride{
@@ -157,30 +158,37 @@ func collectCombinedStatements(
return nil, err
}

app := string(tree.MustBeDString(row[1]))
aggregatedTs := tree.MustBeDTimestampTZ(row[2]).Time
var transactionFingerprintID uint64
if transactionFingerprintID, err = sqlstatsutil.DatumToUint64(row[1]); err != nil {
return nil, err
}

app := string(tree.MustBeDString(row[2]))
aggregatedTs := tree.MustBeDTimestampTZ(row[3]).Time

var metadata roachpb.CollectedStatementStatistics
metadataJSON := tree.MustBeDJSON(row[3]).JSON
metadataJSON := tree.MustBeDJSON(row[4]).JSON
if err = sqlstatsutil.DecodeStmtStatsMetadataJSON(metadataJSON, &metadata); err != nil {
return nil, err
}

metadata.Key.App = app
metadata.Key.TransactionFingerprintID =
roachpb.TransactionFingerprintID(transactionFingerprintID)

statsJSON := tree.MustBeDJSON(row[4]).JSON
statsJSON := tree.MustBeDJSON(row[5]).JSON
if err = sqlstatsutil.DecodeStmtStatsStatisticsJSON(statsJSON, &metadata.Stats); err != nil {
return nil, err
}

planJSON := tree.MustBeDJSON(row[5]).JSON
planJSON := tree.MustBeDJSON(row[6]).JSON
plan, err := sqlstatsutil.JSONToExplainTreePlanNode(planJSON)
if err != nil {
return nil, err
}
metadata.Stats.SensitiveInfo.MostRecentPlanDescription = *plan

aggInterval := tree.MustBeDInterval(row[6]).Duration
aggInterval := tree.MustBeDInterval(row[7]).Duration

stmt := serverpb.StatementsResponse_CollectedStatementStatistics{
Key: serverpb.StatementsResponse_ExtendedStatementStatisticsKey{
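
The reindexing in this file is mechanical but easy to get wrong: inserting `transaction_fingerprint_id` at position 1 shifts every later datum one index to the right and bumps `expectedNumDatums` from 7 to 8. A toy illustration of the invariant (values are made up; column names past `metadata` are inferred from the decoding code above):

```go
package main

import "fmt"

func main() {
	// Column order from the amended SELECT in collectCombinedStatements.
	row := []interface{}{
		uint64(101),      // 0: fingerprint_id
		uint64(202),      // 1: transaction_fingerprint_id (newly inserted)
		"myapp",          // 2: app_name (was index 1)
		"2021-10-26",     // 3: aggregated_ts (was index 2)
		`{"meta": true}`, // 4: metadata (was index 3)
		`{"stats": 1}`,   // 5: statistics (was index 4)
		`{"plan": {}}`,   // 6: plan (was index 5)
		"1h",             // 7: aggregation interval (was index 6)
	}
	const expectedNumDatums = 8 // was 7 before the new column
	if len(row) != expectedNumDatums {
		panic("datum count out of sync with the query")
	}
	fmt.Println(row[2]) // myapp: every consumer index had to shift by one
}
```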
2 changes: 0 additions & 2 deletions pkg/sql/catalog/descs/collection.go
@@ -53,8 +53,6 @@ func makeCollection(
hydratedTables: hydratedTables,
virtual: makeVirtualDescriptors(virtualSchemas),
leased: makeLeasedDescriptors(leaseMgr),
synthetic: makeSyntheticDescriptors(),
uncommitted: makeUncommittedDescriptors(),
kv: makeKVDescriptors(codec),
temporary: makeTemporaryDescriptors(codec, temporarySchemaProvider),
}
3 changes: 1 addition & 2 deletions pkg/sql/catalog/descs/leased_descriptors.go
@@ -64,8 +64,7 @@ func (m maxTimestampBoundDeadlineHolder) UpdateDeadline(

func makeLeasedDescriptors(lm leaseManager) leasedDescriptors {
return leasedDescriptors{
lm: lm,
cache: nstree.MakeMap(),
lm: lm,
}
}

4 changes: 0 additions & 4 deletions pkg/sql/catalog/descs/synthetic_descriptors.go
@@ -20,10 +20,6 @@ type syntheticDescriptors struct {
descs nstree.Map
}

func makeSyntheticDescriptors() syntheticDescriptors {
return syntheticDescriptors{descs: nstree.MakeMap()}
}

func (sd *syntheticDescriptors) set(descs []catalog.Descriptor) {
sd.descs.Clear()
for _, desc := range descs {
9 changes: 0 additions & 9 deletions pkg/sql/catalog/descs/uncommitted_descriptors.go
@@ -82,15 +82,6 @@ type uncommittedDescriptors struct {
descNames nstree.Set
}

func makeUncommittedDescriptors() uncommittedDescriptors {
ud := uncommittedDescriptors{
descs: nstree.MakeMap(),
descNames: nstree.MakeSet(),
}
ud.reset()
return ud
}

func (ud *uncommittedDescriptors) reset() {
ud.descs.Clear()
ud.descNames.Clear()
2 changes: 1 addition & 1 deletion pkg/sql/catalog/lease/name_cache.go
@@ -21,7 +21,7 @@ import (
)

func makeNameCache() nameCache {
return nameCache{descriptors: nstree.MakeMap()}
return nameCache{}
}

// nameCache is a cache of descriptor name -> latest version mappings.
52 changes: 37 additions & 15 deletions pkg/sql/catalog/nstree/map.go
@@ -19,7 +19,8 @@ import (
// Map is a lookup structure for descriptors. It is used to provide
// indexed access to a set of entries either by name or by ID. The
// entries' properties are indexed; they must not change or else the
// index will be corrupted.
// index will be corrupted. Safe for use without initialization. Calling
// Clear will return memory to a sync.Pool.
type Map struct {
byID byIDMap
byName byNameMap
@@ -31,22 +32,10 @@
// stop but no error will be returned.
type EntryIterator func(entry catalog.NameEntry) error

// MakeMap makes a new Map.
func MakeMap() Map {
const (
degree = 8 // arbitrary
initialNodes = 2 // one per tree
)
freeList := btree.NewFreeList(initialNodes)
return Map{
byName: byNameMap{t: btree.NewWithFreeList(degree, freeList)},
byID: byIDMap{t: btree.NewWithFreeList(degree, freeList)},
}
}

// Upsert adds the descriptor to the tree. If any descriptor exists in the
// tree with the same name or id, it will be removed.
func (dt *Map) Upsert(d catalog.NameEntry) {
dt.maybeInitialize()
if replaced := dt.byName.upsert(d); replaced != nil {
dt.byID.delete(replaced.GetID())
}
@@ -58,6 +47,7 @@ func (dt *Map) Upsert(d catalog.NameEntry) {
// Remove removes the descriptor with the given ID from the tree and
// returns it if it exists.
func (dt *Map) Remove(id descpb.ID) catalog.NameEntry {
dt.maybeInitialize()
if d := dt.byID.delete(id); d != nil {
dt.byName.delete(d)
return d
@@ -67,26 +57,58 @@ func (dt *Map) Remove(id descpb.ID) catalog.NameEntry {

// GetByID gets a descriptor from the tree by id.
func (dt *Map) GetByID(id descpb.ID) catalog.NameEntry {
if !dt.initialized() {
return nil
}
return dt.byID.get(id)
}

// GetByName gets a descriptor from the tree by name.
func (dt *Map) GetByName(parentID, parentSchemaID descpb.ID, name string) catalog.NameEntry {
if !dt.initialized() {
return nil
}
return dt.byName.getByName(parentID, parentSchemaID, name)
}

// Clear removes all entries.
// Clear removes all entries, returning any held memory to the sync.Pool.
func (dt *Map) Clear() {
if !dt.initialized() {
return
}
dt.byID.clear()
dt.byName.clear()
btreeSyncPool.Put(dt.byName.t)
btreeSyncPool.Put(dt.byID.t)
*dt = Map{}
}

// IterateByID iterates the descriptors by ID, ascending.
func (dt *Map) IterateByID(f EntryIterator) error {
if !dt.initialized() {
return nil
}
return dt.byID.ascend(f)
}

// Len returns the number of descriptors in the tree.
func (dt *Map) Len() int {
if !dt.initialized() {
return 0
}
return dt.byID.len()
}

func (dt Map) initialized() bool {
return dt != (Map{})
}

func (dt *Map) maybeInitialize() {
if dt.initialized() {
return
}
*dt = Map{
byName: byNameMap{t: btreeSyncPool.Get().(*btree.BTree)},
byID: byIDMap{t: btreeSyncPool.Get().(*btree.BTree)},
}
}
6 changes: 3 additions & 3 deletions pkg/sql/catalog/nstree/map_test.go
@@ -53,14 +53,14 @@ import (
//
func TestMapDataDriven(t *testing.T) {
datadriven.Walk(t, "testdata/map", func(t *testing.T, path string) {
tr := MakeMap()
var tr Map
datadriven.RunTest(t, path, func(t *testing.T, d *datadriven.TestData) string {
return testMapDataDriven(t, d, tr)
return testMapDataDriven(t, d, &tr)
})
})
}

func testMapDataDriven(t *testing.T, d *datadriven.TestData, tr Map) string {
func testMapDataDriven(t *testing.T, d *datadriven.TestData, tr *Map) string {
switch d.Cmd {
case "add":
a := parseArgs(t, d, argID|argName, argParentID|argParentSchemaID)
39 changes: 25 additions & 14 deletions pkg/sql/catalog/nstree/set.go
@@ -15,36 +15,47 @@ import (
"github.com/google/btree"
)

// Set is a set of namespace keys.
// Set is a set of namespace keys. Safe for use without initialization.
// Calling Clear will return memory to a sync.Pool.
type Set struct {
t *btree.BTree
}

// MakeSet makes a Set of namespace keys.
func MakeSet() Set {
const (
degree = 8 // arbitrary
initialNodes = 1 // one per tree
)
freeList := btree.NewFreeList(initialNodes)
return Set{
t: btree.NewWithFreeList(degree, freeList),
}
}

// Add will add the relevant namespace key to the set.
func (s *Set) Add(components catalog.NameKey) {
s.maybeInitialize()
item := makeByNameItem(components).get()
item.v = item // the value needs to be non-nil
upsert(s.t, item)
}

// Contains will test whether the relevant namespace key was added.
func (s *Set) Contains(components catalog.NameKey) bool {
if !s.initialized() {
return false
}
return get(s.t, makeByNameItem(components).get()) != nil
}

// Clear will clear the set.
// Clear will clear the set, returning any held memory to the sync.Pool.
func (s *Set) Clear() {
if !s.initialized() {
return
}
clear(s.t)
btreeSyncPool.Put(s.t)
*s = Set{}
}

func (s *Set) maybeInitialize() {
if s.initialized() {
return
}
*s = Set{
t: btreeSyncPool.Get().(*btree.BTree),
}
}

func (s Set) initialized() bool {
return s != (Set{})
}
6 changes: 3 additions & 3 deletions pkg/sql/catalog/nstree/set_test.go
@@ -36,14 +36,14 @@ import (
//
func TestSetDataDriven(t *testing.T) {
datadriven.Walk(t, "testdata/set", func(t *testing.T, path string) {
tr := MakeSet()
var tr Set
datadriven.RunTest(t, path, func(t *testing.T, d *datadriven.TestData) string {
return testSetDataDriven(t, d, tr)
return testSetDataDriven(t, d, &tr)
})
})
}

func testSetDataDriven(t *testing.T, d *datadriven.TestData, tr Set) string {
func testSetDataDriven(t *testing.T, d *datadriven.TestData, tr *Set) string {
switch d.Cmd {
case "add":
a := parseArgs(t, d, argName, argParentID|argParentSchemaID)
11 changes: 11 additions & 0 deletions pkg/sql/catalog/nstree/tree.go
@@ -13,6 +13,8 @@
package nstree

import (
"sync"

"github.com/cockroachdb/cockroach/pkg/util/iterutil"
"github.com/google/btree"
)
@@ -23,6 +25,15 @@ type item interface {
value() interface{}
}

// degree is totally arbitrary, used for the btree.
const degree = 8

var btreeSyncPool = sync.Pool{
New: func() interface{} {
return btree.New(degree)
},
}

func upsert(t *btree.BTree, toUpsert item) interface{} {
if overwritten := t.ReplaceOrInsert(toUpsert); overwritten != nil {
overwrittenItem := overwritten.(item)