forked from cockroachdb/cockroach
Ajwerner/wrapped descriptors #4
Closed
49475: opt: create library that determines how joins affect input rows r=andy-kimball a=DrewKimball

Previously, there was no simple way to determine whether all rows from a join input will be included in its output, nor whether input rows will be duplicated by the join. This patch adds a library that constructs a Multiplicity struct for join operators. The Multiplicity can be queried for information about how a join will affect its input rows (e.g. duplicated, filtered, and/or null-extended). The existing SimplifyLeftJoinWithFilters rule has been refactored to use this library. The Multiplicity library will also be useful for future join elimination and limit pushdown rules.

Release note: None

49662: roachtest: don't run schema change workload on 19.2 releases r=spaskob a=spaskob

Fixes cockroachdb#47024.

Release note (bug fix): The schema change workload is meant for testing the behavior of schema changes on clusters with nodes with min version 19.2. It will deadlock on earlier versions.

Co-authored-by: Drew Kimball <andrewekimball@gmail.com> Co-authored-by: Spas Bojanov <spas@cockroachlabs.com>
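The Multiplicity idea described above can be sketched roughly as follows. This is a hypothetical Python model, not the actual Go API in `opt`; the names `MultiplicityValue`, `joins_preserve_left_rows`, and the example values are all illustrative:

```python
from enum import Flag, auto

class MultiplicityValue(Flag):
    """Per join input: can the join duplicate or filter that input's rows?"""
    PRESERVED = 0        # neither duplicated nor filtered
    DUPLICATED = auto()  # an input row may match multiple rows on the other side
    FILTERED = auto()    # an input row may match no rows and be dropped

class Multiplicity:
    def __init__(self, left: MultiplicityValue, right: MultiplicityValue):
        self.left = left
        self.right = right

    def joins_preserve_left_rows(self) -> bool:
        # Useful for join elimination: every left row survives the join.
        return not (self.left & MultiplicityValue.FILTERED)

    def joins_duplicate_left_rows(self) -> bool:
        # Useful for limit pushdown: a left row may appear more than once.
        return bool(self.left & MultiplicityValue.DUPLICATED)

# e.g. an inner join from a child table to its foreign-key parent:
# every child row matches exactly one parent row, so the left side is
# preserved, while a parent row may be duplicated or filtered.
m = Multiplicity(left=MultiplicityValue.PRESERVED,
                 right=MultiplicityValue.DUPLICATED | MultiplicityValue.FILTERED)
```

A rule like SimplifyLeftJoinWithFilters would then consult queries such as `joins_preserve_left_rows` instead of re-deriving the property itself.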
49611: searchpath: add pg_extension to searchpath.Iter() before public r=rohany a=otan

Release note (sql change): When doing name resolution via the search path, the `pg_extension` schema (containing tables such as `geometry_columns`, `geography_columns`, and `spatial_ref_sys`) will be resolved against before the `public` schema. This mimics PostGIS behavior, where the aforementioned tables live in the public schema and so are discoverable by default in a new CLI session.

Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
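The resolution-order change can be illustrated with a toy sketch (hypothetical names and data; not the actual `searchpath.Iter()` implementation):

```python
def resolve(name, schemas, search_path=("pg_extension", "public")):
    """Try each schema on the search path in order and return the first
    match. Per the change above, pg_extension is consulted just before
    public."""
    for schema in search_path:
        if name in schemas.get(schema, set()):
            return f"{schema}.{name}"
    return None

# Illustrative catalog: geometry_columns exists in both schemas, so the
# pg_extension copy shadows the public one.
schemas = {
    "pg_extension": {"geometry_columns", "spatial_ref_sys"},
    "public": {"users", "geometry_columns"},
}
```

With this ordering, `resolve("geometry_columns", schemas)` finds the `pg_extension` copy, while names that exist only in `public` still resolve there.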
When this test was introduced, the cluster version override was added to ensure that rangefeeds would be used. This was not necessary given the other testing knob which this change adopts. The problem with overriding the version is that it can happen after the automatic version upgrade has already occurred, leading to a fatal error due to trying to downgrade the cluster version. Fixes cockroachdb#49632. Release note: None
Previously, there were two pairs of rules for join simplification: one to handle joins with filters, and one to handle cross joins. Since the filtersMatchAllLeftRows function can now handle cross joins, there is no reason to have two sets of rules. This patch folds the four join simplification rules into two: SimplifyLeftJoin and SimplifyRightJoin. Release note: None
49681: lease: fix flakey TestRangefeedUpdatesHandledProperlyInTheFaceOfRaces r=ajwerner a=ajwerner Release note: None Co-authored-by: Andrew Werner <ajwerner@cockroachlabs.com>
The work to remove the dependency has been done, so we're just waiting until we can remove the code. Until then, make sure this use of Gossip no longer shows up as a prominent caller of DeprecatedGossip, for easier bookkeeping. Release note: None
Visual regression testing is accomplished with the `cypress-image-snapshot` plugin. The main issue solved with this change is how modules are loaded from a custom `node_modules` location (`./opt/node_modules`):

- For `.ts` files, `baseUrl` and `typeRoots` point to `./opt/node_modules` to resolve packages and typings.
- The webpack configuration is extended with an additional module resolver.
- Plugin modules (which aren't processed by webpack) use a helper function which appends the `opt/node_modules` path to the required module name.

Release note: None
49696: Makefile: fix handling of non-existent *.eg.go files r=petermattis a=petermattis When a `bin/*.d` file references an `*.eg.go` file, that file was required to exist or be buildable by the Makefile. Any commit which removed a `*.eg.go` file would violate this requirement, causing the build to fail until the offending `bin/*.d` file was removed. In order to prevent this badness, a catchall `%.eg.go` rule is added which will force the target dependent on the `bin/%.d` file to be rebuilt. Fixes cockroachdb#49676 Release note: None Co-authored-by: Peter Mattis <petermattis@gmail.com>
49560: geo/geoproj: initial framework to import the PROJ library r=petermattis,sumeerbhola a=otan This commit introduces PROJ v4.9.3 to CockroachDB's compilation. We cannot use a later version as we would like to exclude installing sqlite3 as a prerequisite for now. To get this to work, `bin/execgen` must depend on the `LIBPROJ` target to compile, and the c-deps ban on `pkg/geo/geoproj` was removed. Added some stub functions (non-finalized) that test whether functionality works as expected. Release note: None Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
49656: sql: fix panic when attempting to clone a table with a UDT r=rohany a=rohany Due to how the type metadata was represented, there could have been structs with unimplemented fields in the `TypeMeta`'s `UserDefinedTypeName`. This would cause protobuf to panic when attempting to perform reflection into `types.T` when cloning. Release note: None 49699: Fixed code rot in builder README r=aaron-crl a=aaron-crl Updated Dockerfile path in README. Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu> Co-authored-by: Aaron Blum <aaron@cockroachlabs.com>
This commit adds a new package for dealing with expressions within table schemas, such as check constraints, computed columns, and partial index predicates. The utility previously provided by the following functions has been moved and refactored into this new package:

- dequalifyColumnRefs, now provided by DequalifyColumnRefs.
- makeCheckConstraint, now provided by CheckConstraintBuilder.
- validateComputedColumn, now provided by ComputedColumnValidator.
- validateIndexPredicate, now provided by IndexPredicateValidator.

In addition, several helper functions were refactored into the new package, including generateMaybeDuplicateNameForCheckConstraint, generateNameForCheckConstraint, iterColDescriptorsInExpr, and replaceVars. dummyColumnItems were also moved into the new package.

Release note: None
49685: build: remove muslc from the building process r=otan a=otan

This commit removes building muslc from the release process. It is causing issues with the GEOS build, and we're not confident it is worth spending the time to debug it.

* Delete mentions of muslc from publishing artifacts / provisional artifacts.
* Delete muslc from the Docker image.
* Remove Makefile items that are musl-c related.
* Unfortunately, I am not able to push the Docker image to cockroachdb/builder. I'm not sure that's important atm.

Release note (general change): Removed the publication of muslc CockroachDB builds.

Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
49134: opt: add support for calculating selectivity from multi-column stats r=rytaft a=rytaft

**opt: fix bug with histograms and multi-span index constraints**

Prior to this commit, filtering a histogram with a multi-span index constraint could lead to incorrect results if the histogram column was part of the exact prefix of the constraint. This was due to the fact that the same value appeared in multiple spans, breaking the assumption of the histogram code that values in spans were always ordered and non-overlapping. For example, filtering a histogram on column b with the constraint `/b/c/: [/1/2 - /1/2] [/1/4 - /1/4]` would return an incorrect result, because the values in the matching histogram bucket would be counted twice. This commit fixes the problem by only considering a single span if the column is part of the exact prefix.

Release note (performance improvement): Fixed a bug in the histogram logic in the optimizer which was causing an over-estimate for the cardinality of constrained index scans in some cases when multiple columns of the index were constrained. This problem was introduced early in the development for the 20.2 release so should not have ever been part of a release. The fix improves the optimizer's cardinality estimates so may result in better query plan selection.

**opt: fix distinct count estimates for index constraint columns**

Prior to this commit, the statistics_builder was incorrectly estimating the distinct count of columns that were only slightly constrained as part of an index constraint. For example, it was estimating based on constraints such as `/a/b: [/1 - /5/6]` or `/a/b: [ - /5/6]` that the distinct count of column b should be reduced by 2/3. However, in reality, we cannot assume anything about the distinct count of column b based on those two constraints.
This commit fixes the estimate by only reducing the distinct count for columns that are part of the prefix of the constraint (columns for which all the spans have the same start and end values) or the first column after.

Release note (performance improvement): Fixed the optimizer's distinct count estimate for columns constrained by an index constraint, which was too low in some cases. The fix improves the optimizer's cardinality estimates, which can lead to better query plan selection.

**opt: ensure multi-col stats are consistent with single-col stats**

Prior to this commit, it was possible that the estimated distinct count of a multi-column statistic was larger than the product of the estimated distinct counts of all of its individual columns. This could happen because we reduce the distinct count of columns that are constrained by a predicate, but we don't update the distinct counts of any multi-column stats that contain that column. This is internally inconsistent and could lead to bad cardinality estimates. This commit fixes the problem by explicitly checking for such inconsistencies and fixing them after the stats for each operator are built or a column stat is calculated. If it is found that the estimated distinct count is too high, it is reset to the product of the distinct counts of its constituent columns. This commit also adds a check that the distinct count of the multi-column statistic is no smaller than the largest distinct count of its constituent columns (although I haven't yet found a test case where this was a problem).

Release note (performance improvement): Fixed the optimizer's estimated distinct count for a multi-column statistic when all of the columns in the statistic are constrained by a filter predicate. The fix can lead to improved cardinality estimates, leading to better query plan selection in some cases.
**opt: add support for calculating selectivity from multi-column stats**

This commit adds support for calculating selectivity from multi-column statistics. It changes `selectivityFromDistinctCounts` to have the following semantics: `selectivityFromDistinctCounts` calculates the selectivity of a filter by using estimated distinct counts of each constrained column before and after the filter was applied. We can perform this calculation in two different ways: (1) by treating the columns as completely independent, or (2) by assuming they are correlated.

(1) Assuming independence between columns, we can calculate the selectivity by taking the product of selectivities of each constrained column. In the general case, this can be represented by the formula:

```
              ┬-┬ ⎛ new_distinct(i) ⎞
selectivity = │ │ ⎜ --------------- ⎟
              ┴ ┴ ⎝ old_distinct(i) ⎠
          i in {constrained columns}
```

(2) If `useMultiCol` is true, we assume there is some correlation between columns. In this case, we calculate the selectivity using multi-column statistics.

```
              ⎛ new_distinct({constrained columns}) ⎞
selectivity = ⎜ ----------------------------------- ⎟
              ⎝ old_distinct({constrained columns}) ⎠
```

This formula looks simple, but the challenge is that it is difficult to determine the correct value for new_distinct({constrained columns}) if each column is not constrained to a single value. For example, if new_distinct(x)=2 and new_distinct(y)=2, new_distinct({x,y}) could be 2, 3 or 4.
We estimate the new distinct count as follows, using the concept of "soft functional dependency (FD) strength":

```
new_distinct({x,y}) = min_value + range * (1 - FD_strength_scaled)

where min_value = max(new_distinct(x), new_distinct(y))
      max_value = new_distinct(x) * new_distinct(y)
      range     = max_value - min_value

              ⎛ max(old_distinct(x),old_distinct(y)) ⎞
FD_strength = ⎜ ------------------------------------ ⎟
              ⎝         old_distinct({x,y})          ⎠

                  ⎛ max(old_distinct(x), old_distinct(y)) ⎞
min_FD_strength = ⎜ ------------------------------------- ⎟
                  ⎝   old_distinct(x) * old_distinct(y)   ⎠

                     ⎛ FD_strength - min_FD_strength ⎞
FD_strength_scaled = ⎜ ----------------------------- ⎟
                     ⎝      1 - min_FD_strength      ⎠
```

Suppose that old_distinct(x)=100 and old_distinct(y)=10. If x and y are perfectly correlated, old_distinct({x,y})=100. Using the example from above, new_distinct(x)=2 and new_distinct(y)=2. Plugging the values into the equation, we get:

```
FD_strength_scaled = 1
new_distinct({x,y}) = 2 + (4 - 2) * (1 - 1) = 2
```

If x and y are completely independent, however, old_distinct({x,y})=1000. In this case, we get:

```
FD_strength_scaled = 0
new_distinct({x,y}) = 2 + (4 - 2) * (1 - 0) = 4
```

Note that even if `useMultiCol` is true and we calculate the selectivity based on equation (2) above, we still want to take equation (1) into account. This is because it is possible that there are two predicates that each have selectivity s, but the multi-column selectivity is also s. In order to ensure that the cost model considers the two predicates combined to be more selective than either one individually, we must give some weight to equation (1). Therefore, instead of equation (2) we actually return the following selectivity:

```
selectivity = (1 - w) * (eq. 1) + w * (eq. 2)
```

where w is currently set to 0.9. This selectivity is later used to update the row count and the distinct count for the unconstrained columns.
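For illustration, here is a small Python transcription of the formulas above, checked against the worked example. The function names are hypothetical; the actual implementation lives in the optimizer's statistics builder:

```python
def estimate_multi_col_distinct(new_x, new_y, old_x, old_y, old_xy):
    """Soft-FD estimate of new_distinct({x,y}) from the formulas above."""
    min_value = max(new_x, new_y)
    max_value = new_x * new_y
    rng = max_value - min_value
    fd_strength = max(old_x, old_y) / old_xy
    min_fd_strength = max(old_x, old_y) / (old_x * old_y)
    fd_strength_scaled = (fd_strength - min_fd_strength) / (1 - min_fd_strength)
    return min_value + rng * (1 - fd_strength_scaled)

def blended_selectivity(single_col_sel, multi_col_sel, w=0.9):
    """selectivity = (1 - w) * (eq. 1) + w * (eq. 2), with w = 0.9."""
    return (1 - w) * single_col_sel + w * multi_col_sel

# Worked example: old_distinct(x)=100, old_distinct(y)=10,
# new_distinct(x)=new_distinct(y)=2.
perfectly_correlated = estimate_multi_col_distinct(2, 2, 100, 10, 100)   # -> 2.0
fully_independent = estimate_multi_col_distinct(2, 2, 100, 10, 1000)     # -> 4.0
```

Running the two calls reproduces the 2 and 4 from the text, which is a quick sanity check that the transcription matches the prose.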
Fixes cockroachdb#34422 Release note (performance improvement): Added support for calculating the selectivity of filter predicates in the optimizer using multi-column statistics. This improves the cardinality estimates of the optimizer when a query has filter predicates constraining multiple columns. As a result, the optimizer may choose a better query plan in some cases. Co-authored-by: Rebecca Taft <becca@cockroachlabs.com>
Use the new storage/FS.RemoveAll method to clean up temporary directories. Release note: None
49444: sqlbase: Add created_by columns to system.jobs r=miretskiy a=miretskiy

Informs cockroachdb#49346

Add created_by_type and created_by_id columns, along with the index over these columns, to the system.jobs table. Add sql migration code to migrate the old definition to the new one. See https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20200414_scheduled_jobs.md

Release Notes: None

49680: sql: add LeakProof volatility r=RaduBerinde a=RaduBerinde

This change adds VolatilityLeakProof and changes a few hashing builtins to use it (for consistency with postgres). In postgres, `proleakproof` is a flag separate from `provolatile`, but all instances of leak-proof operators are immutable. We also change the volatility to a simple index since it no longer maps to the postgres volatility directly. In the optimizer, we will need to have a bitfield for which types of volatilities are contained in an expression, so having them be small consecutive integers is desirable.

Release note: None

![image](https://user-images.githubusercontent.com/16544120/83211087-13631d00-a111-11ea-9a10-cae758a4cfbf.png)

49713: vendor: bump pebble to a41e4ac9 r=jbowens a=jbowens

a41e4a *: Update level_checker to account for sublevels
b14003 internal/manifest: Relax L0 CheckOrdering invariants for flush splits
9249ee db: add max memtable and max batch size checks
bb2f85 vfs: return nil from RemoveAll for nonexistent paths
eb5d41 db: use compaction picker's scores directly in (*db).Metrics
65313e cmd/pebble: sample read amp throughout compaction benchmarks

Co-authored-by: Yevgeniy Miretskiy <yevgeniy@cockroachlabs.com> Co-authored-by: Radu Berinde <radu@cockroachlabs.com> Co-authored-by: Jackson Owens <jackson@cockroachlabs.com>
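The "small consecutive integers" motivation in the 49680 description can be illustrated with a toy bitfield sketch. The names here are hypothetical (not CockroachDB's actual `Volatility` type); the point is that consecutive integer values map cheaply to individual bits:

```python
from enum import IntEnum

class Volatility(IntEnum):
    """Small consecutive integers so each volatility maps to one bit."""
    LEAK_PROOF = 0
    IMMUTABLE = 1
    STABLE = 2
    VOLATILE = 3

def volatility_set(volatilities):
    """Collect the volatilities appearing in an expression into a bitfield."""
    bits = 0
    for v in volatilities:
        bits |= 1 << v
    return bits

def contains(bits, v):
    """Check whether a given volatility appears anywhere in the expression."""
    return bool(bits & (1 << v))
```

An optimizer walking an expression tree can OR together one such bitfield per subexpression, then answer "does this expression contain anything volatile?" with a single mask test.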
49667: sql: don't split ranges on user-defined types or schemas r=nvanbenschoten a=nvanbenschoten

Noticed while working on multi-tenancy and confirmed that this was an issue manually.

```
root@127.0.0.1:51841/movr> set experimental_enable_enums=true;
SET

Time: 259µs

root@127.0.0.1:51841/movr> select count(*) from crdb_internal.ranges;
  count
---------
     62
(1 row)

Time: 2.251ms

root@127.0.0.1:51841/movr> CREATE TYPE t AS ENUM ();
CREATE TYPE

Time: 1.698ms

root@127.0.0.1:51841/movr> select count(*) from crdb_internal.ranges;
  count
---------
     63
(1 row)

Time: 1.815ms
```

I tried to write a logic test that did essentially this, but that ended up being pretty frustrating to get right and the end result seemed too fragile to be worth it.

Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Release note (sql change): Previously, using NULL (or tuples containing NULLs) as the left-hand side of an IN operator would not typecheck unless the NULLs were explicitly cast. Now, the cast is not required.
For GROUP BY to work, there needs to be a way of encoding / decoding table keys for each column type. As such, define EncodeTableKey and DecodeTableKey for geo types to be their underlying protobuf representation (we currently have no better way!). Note that this does not yet allow us to support PKs for geospatial (we could, but we need some more iteration beforehand). I've shifted the array descriptor markers around as they seem to be new in 20.2 as well -- so they should be stable in the upcoming alpha release. Release note: None
49419: colexec: add binary overloads on datum-backed types r=yuzefovich a=yuzefovich

**tree, colexec: preliminary steps for datum-backed binary overloads**

This commit does a few things that are preliminary to adding support of binary operations on datum-backed vectors:
1. it exports the `fn` field of `tree.BinaryExpr` - we will be using this function to evaluate the binary operation on datum-backed vectors
2. it renames `decimalOverloadScratch` to `overloadHelper` and plumbs it from the planning stage for projection and selection operators. This struct will be used to supply the resolved `BinaryExpr.Fn` function (and possibly other information) later
3. it cleans up the `vectorize_types` logic test a bit since we now support all types.

Release note: None

**colexec: add support for binary operations on datum-backed types**

This commit adds the support for 4 binary operations when at least one of the arguments is a datum-backed vector and the result is a datum-backed vector as well. We reuse the same function that the row engine uses when evaluating such expressions, and the function is plumbed during the planning via the `overloadHelper` struct. Native (i.e. not datum-backed) arguments are converted to datums at runtime.

Release note: None

Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
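The runtime fallback described above (convert native values to datums, then apply the row engine's function element-wise) can be sketched like this. This is an illustrative Python model, not the actual colexec code; `eval_binary_on_columns` and `to_datum` are hypothetical names:

```python
def eval_binary_on_columns(fn, left_col, right_col, to_datum=lambda v: v):
    """Fallback evaluation for datum-backed vectors: convert each native
    value to a datum representation and apply the row engine's binary
    function one row at a time, producing a datum-backed result column."""
    return [fn(to_datum(l), to_datum(r)) for l, r in zip(left_col, right_col)]

# e.g. reusing a row-engine "plus" function on two native int columns:
result = eval_binary_on_columns(lambda a, b: a + b, [1, 2, 3], [10, 20, 30])
```

The trade-off is the same one the commit implies: correctness and coverage for all types at the cost of per-row function-call overhead instead of a tight vectorized loop.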
49788: opt: add rules to eliminate a do-nothing join under a Project r=DrewKimball a=DrewKimball Previously, the optimizer couldn't eliminate a join in the input of a Project when the removal would have no effect on the output of the Project operator. This patch adds rules to replace a join with one of its input relations when the following conditions are met: 1. The Project doesn't use any columns from the "discarded" input. 2. The Join doesn't eliminate or duplicate rows from the "preserved" input. Fixes cockroachdb#49149 Release note (sql change): The optimizer can now remove an unnecessary join from the input of a Project operator. Co-authored-by: Drew Kimball <andrewekimball@gmail.com>
49802: opt: modify limit pushdown rule to support inner joins r=DrewKimball a=DrewKimball Previously, the optimizer could not push a limit into an InnerJoin. This patch replaces PushLimitIntoLeftJoin with two rules which perform the same function as well as handle the InnerJoin case. A limit can be pushed into a given side of an InnerJoin when rows from that side are guaranteed to be preserved by the join. Release note (sql change): improve performance for queries with a limit on a join that is guaranteed to preserve input rows. Co-authored-by: Drew Kimball <drewk@cockroachlabs.com>
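The pushdown condition in 49802 can be illustrated with a toy example: when every row of one side is guaranteed to be preserved by an inner join (here, a foreign-key-style lookup where each left row matches exactly one right row), limiting that side before the join does not change the limited output. Illustrative Python, not the optimizer's rule code:

```python
def inner_join(left, right, pred):
    """Naive nested-loop inner join, for illustration only."""
    return [(l, r) for l in left for r in right if pred(l, r)]

def limit(rows, n):
    return rows[:n]

# Every left row matches exactly one right row, so the join preserves
# left rows and a limit can safely be pushed below the join.
left = [1, 2, 3, 4]
right = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
pred = lambda l, r: l == r[0]

full = limit(inner_join(left, right, pred), 2)
pushed = limit(inner_join(limit(left, 2), right, pred), 2)
# full and pushed are equal, but `pushed` joined only 2 left rows.
```

If the join could filter left rows (some left row with no match), the pushed-down plan might return fewer rows than requested, which is why the preservation guarantee is required.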
Only the sub-directory `logs` is mounted inside the docker image run by TestDockerCLI; ensure more stuff is saved there. Release note: None
Release note: None
49936: log: add a mutex for stderr redirection r=tbg a=knz Fixes cockroachdb#49930 The `TestLogScope` infrastructure is meant to force-configure logging to go to a temporary directory. It was designed with the assumption it would be used "outside" of server initialization, i.e. non-concurrently with `log` API calls. However, at least one test (`TestHBAAuthenticationRules`) violates the assumption and switches the output directory while a server is running. After checking it appears that the behavior is sound, but there was one racy access remaining: the write to `os.Stderr` during the redirect. This had been racy ever since TestLogScope was introduced; however it was only visible on windows because the unix builds were missing the assignment to `os.Stderr`. A recent fix to make the builds consistent revealed the race to our CI checker. This patch fixes it by adding the missing mutex. Release note: None Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
49522: sql: make `SHOW DATABASES` not need to access schemas r=jordanlewis a=rohany This PR fixes a performance regression in the `SHOW DATABASES` statement. The regression was caused by the table backing the show databases statement performing a lookup of each database's schemas. Release note (performance improvement): Fix a performance regression in the `SHOW DATABASES` command introduced in 20.1. Release note (sql change): Add the `crdb_internal.databases` virtual table. Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
49858: sql: use a separate type for pgcodes r=knz a=rohany Fixes cockroachdb#49694. This PR uses a separate wrapped type for pgcodes to ensure that arbitrary strings are not passed to the pgerror functions in place of pgcodes. Release note: None Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
49937: cli/interactive_tests: misc improvements r=rohany a=knz Found while working on cockroachdb#48051 Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
49944: sql: add tests for use of CTAS with enum types r=jordanlewis a=rohany Work for cockroachdb#48728. This PR adds some tests to verify that different uses of CTAS work as expected when user defined types are in the mix. Release note: None Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
49945: sql: enable adding check constraints that use enums r=jordanlewis a=rohany Work for cockroachdb#48728. This PR ensures that check constraints that use enums can be added and validated. It also updates the sites where check validations fail to ensure that the displayed error message has been deserialized from the internal representation. Release note: None Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
This lifts NameInfo, Version, and ModificationTime to a new message that lives on the Descriptor rather than inside of the TableDescriptor. It does not yet adopt these fields. Release note: None
For now, move the Descriptor interface down into sqlbase as part of the effort to eliminate sqlbase.DescriptorProto. Release note: None
In preparation for merging that interface with DescriptorInterface. Release note: None
This change is large but largely mechanical. The basic idea is that we want to stop passing around raw pointers to protobuf structs. Instead we'd like to pass around higher-level wrappers which implement interfaces. There's some room for debate about the precise nature of Mutable/Immutable wrappers for descriptor types, but over time, hopefully, we'll move descriptor manipulation behind clear interfaces. It's worth noting that this commit is a half-step: it begins, somewhat unfortunately, by introducing the TypeDescriptor wrappers, and then goes all the way to deleting the `DescriptorProto` interface and unexporting `WrapDescriptor`. Release note: None
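The wrapper pattern this series moves toward can be sketched as follows. All names here are hypothetical stand-ins chosen for illustration; they are not the actual sqlbase types:

```go
package main

import "fmt"

// tableDescriptorProto mimics a raw protobuf struct that used to be
// passed around by pointer. (Hypothetical, for illustration.)
type tableDescriptorProto struct {
	ID   uint32
	Name string
}

// Descriptor is the kind of narrow interface the wrappers implement,
// so that callers stop reaching into proto fields directly.
type Descriptor interface {
	GetID() uint32
	GetName() string
}

// immutableTable wraps the proto and exposes read-only access.
type immutableTable struct {
	proto tableDescriptorProto
}

func (t *immutableTable) GetID() uint32   { return t.proto.ID }
func (t *immutableTable) GetName() string { return t.proto.Name }

// mutableTable embeds the immutable wrapper and adds setters, keeping
// mutation behind an explicit, distinct type.
type mutableTable struct {
	immutableTable
}

func (t *mutableTable) SetName(name string) { t.proto.Name = name }

func main() {
	m := &mutableTable{immutableTable{tableDescriptorProto{ID: 52, Name: "users"}}}
	m.SetName("accounts")
	var d Descriptor = m // callers see only the interface
	fmt.Println(d.GetID(), d.GetName())
}
```

Splitting Mutable from Immutable at the type level lets the signature of a function document whether it may modify a descriptor, which is exactly the ambiguity that passing raw proto pointers left unresolved.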
…escriptor Next we'll stop having the raw descriptor implement that interface. Release note: None
This is a large but also mostly mechanical change. There's room for improvement on the uniformity of it all, but let's leave that be for now as I'm getting tired and need to keep making progress; it'll be easier to clean it up later. In particular, I've increasingly been feeling that an interface-based approach to these descriptors would be worthwhile, and that it would be easy to accomplish for databases, but alas. That will have to be another big and mechanical change. Release note: None
This is in anticipation of removing those fields from the proto. This commit also lifts the "unwrapping" of descriptors into catalogkv and eliminates UnwrapDescriptor. Release note: None
This commit adopts the new fields. TODO(ajwerner): Revisit this commit, I think it gets a lot of things wrong. Release note: None
Release note: None
These fields now live on DescriptorMeta. Release note: None
This commit adopts the interface methods for accessing the ID and Name of a DatabaseDescriptor. The reworking in this commit of changes made earlier in this PR is annoying; I'm sorry to the reviewers. I'm also increasingly sensing the urgency of eschewing the concrete descriptor wrappers in favor of interfaces, but I'm not going to try to deal with that in this PR. It is in many ways more of a mess than what was there, but it's laying the foundations. Release note: None
In anticipation of lifting methods to the wrapper types. Release note: None
Release note: None
ajwerner pushed a commit that referenced this pull request on Apr 29, 2022:
…db#80762 cockroachdb#80773 79911: opt: refactor and test lookup join key column and expr generation r=mgartner a=mgartner #### opt: simplify fetching outer column in CustomFuncs.findComputedColJoinEquality Previously, `CustomFuncs.findComputedColJoinEquality` used `CustomFuncs.OuterCols` to retrieve the outer columns of computed column expressions. `CustomFuncs.OuterCols` returns the cached outer columns in the expression if it is a `memo.ScalarPropsExpr`, and falls back to calculating the outer columns with `memo.BuildSharedProps` otherwise. Computed column expressions are never `memo.ScalarPropsExpr`s, so we just use `memo.BuildSharedProps` directly. Release note: None #### opt: make RemapCols a method on Factory instead of CustomFuncs Release note: None #### opt: use partial-index-reduced filters when building lookup expressions This commit makes a minor change to `generateLookupJoinsImpl`. Previously, equality filters were extracted from the original `ON` filters. Now they are extracted from filters that have been reduced by partial index implication. This has no effect on behavior because equality filters that reference columns in two tables cannot exist in partial index predicates, so they will never be eliminated during partial index implication. Release note: None #### opt: move some lookup join generation logic to the lookupjoin package This commit adds a new `lookupjoin` package. Logic for determining the key columns and lookup expressions for lookup joins has been moved to `lookupjoin.ConstraintBuilder`. The code was moved with as few changes as possible, and the behavior does not change in any way. This move will make it easier to test this code in isolation in the future, and allow for further refactoring. Release note: None #### opt: generalize lookupjoin.ConstraintBuilder API This commit makes the lookupjoin.ConstraintBuilder API more general to make unit testing easier in a future commit.
Release note: None #### opt: add data-driven tests for lookupjoin.ConstraintBuilder Release note: None #### opt: add lookupjoin.Constraint struct The `lookupjoin.Constraint` struct has been added to encapsulate multiple data structures that represent a strategy for constraining a lookup join. Release note: None 80511: pkg/cloud/azure: Support specifying Azure environments in storage URLs r=adityamaru a=nlowe-sx The Azure Storage cloud provider learned a new parameter, AZURE_ENVIRONMENT, which specifies which azure environment the storage account in question belongs to. This allows cockroach to backup and restore data to Azure Storage Accounts outside the main Azure Public Cloud. For backwards compatibility, this defaults to "AzurePublicCloud" if AZURE_ENVIRONMENT is not specified. Fixes cockroachdb#47163 ## Verification Evidence I spun up a single node cluster: ``` nlowe@nlowe-z4l:~/projects/github/cockroachdb/cockroach [feat/47163-azure-storage-support-multiple-environments L|✚ 2] [🗓 2022-04-22 08:25:49] $ bazel run //pkg/cmd/cockroach:cockroach -- start-single-node --insecure WARNING: Option 'host_javabase' is deprecated WARNING: Option 'javabase' is deprecated WARNING: Option 'host_java_toolchain' is deprecated WARNING: Option 'java_toolchain' is deprecated INFO: Invocation ID: 11504a98-f767-413a-8994-8f92793c2ecf INFO: Analyzed target //pkg/cmd/cockroach:cockroach (0 packages loaded, 0 targets configured). INFO: Found 1 target... Target //pkg/cmd/cockroach:cockroach up-to-date: _bazel/bin/pkg/cmd/cockroach/cockroach_/cockroach INFO: Elapsed time: 0.358s, Critical Path: 0.00s INFO: 1 process: 1 internal. INFO: Build completed successfully, 1 total action INFO: Build completed successfully, 1 total action * * WARNING: ALL SECURITY CONTROLS HAVE BEEN DISABLED! * * This mode is intended for non-production testing only. * * In this mode: * - Your cluster is open to any client that can access any of your IP addresses. 
* - Intruders with access to your machine or network can observe client-server traffic. * - Intruders can log in without password and read or write any data in the cluster. * - Intruders can consume all your server's resources and cause unavailability. * * * INFO: To start a secure server without mandating TLS for clients, * consider --accept-sql-without-tls instead. For other options, see: * * - https://go.crdb.dev/issue-v/53404/dev * - https://www.cockroachlabs.com/docs/dev/secure-a-cluster.html * * * WARNING: neither --listen-addr nor --advertise-addr was specified. * The server will advertise "nlowe-z4l" to other nodes, is this routable? * * Consider using: * - for local-only servers: --listen-addr=localhost * - for multi-node clusters: --advertise-addr=<host/IP addr> * * CockroachDB node starting at 2022-04-22 15:25:55.461315977 +0000 UTC (took 2.1s) build: CCL unknown @ (go1.17.6) webui: http://nlowe-z4l:8080/ sql: postgresql://root@nlowe-z4l:26257/defaultdb?sslmode=disable sql (JDBC): jdbc:postgresql://nlowe-z4l:26257/defaultdb?sslmode=disable&user=root RPC client flags: /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach <client cmd> --host=nlowe-z4l:26257 --insecure logs: /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/logs temp dir: /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/cockroach-temp4100501952 external I/O path: /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/extern store[0]: 
path=/home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data storage engine: pebble clusterID: bb3942d7-f241-4d26-aa4a-1bd0d6556e4d status: initialized new cluster nodeID: 1 ``` I was then able to view the contents of a backup hosted in an azure government storage account: ``` root@:26257/defaultdb> SELECT DISTINCT object_name FROM [SHOW BACKUP 'azure://container/path/to/backup?AZURE_ACCOUNT_NAME=account&AZURE_ACCOUNT_KEY=***&AZURE_ENVIRONMENT=AzureUSGovernmentCloud'] WHERE object_type = 'database'; object_name ------------------------------------------ example_database ... (17 rows) Time: 5.859632889s ``` Omitting the `AZURE_ENVIRONMENT` parameter, we can see cockroach defaults to the public cloud where my storage account does not exist: ``` root@:26257/defaultdb> SELECT DISTINCT object_name FROM [SHOW BACKUP 'azure://container/path/to/backup?AZURE_ACCOUNT_NAME=account&AZURE_ACCOUNT_KEY=***'] WHERE object_type = 'database'; ERROR: reading previous backup layers: unable to list files for specified blob: Get "https://account.blob.core.windows.net/container?comp=list&delimiter=path%2Fto%2Fbackup&restype=container&timeout=61": dial tcp: lookup account.blob.core.windows.net on 8.8.8.8:53: no such host ``` ## Tests Two new tests are added to verify that the storage account URL is correctly built from the provided Azure Environment name, and that the Environment defaults to the Public Cloud if unspecified for backwards compatibility. 
I verified the existing tests pass against a government storage account after specifying `AZURE_ENVIRONMENT` as `AzureUSGovernmentCloud` in the backup URL query parameters: ``` nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓 2022-04-22 17:38:26] $ export AZURE_ACCOUNT_NAME=account nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓 2022-04-22 17:38:42] $ export AZURE_ACCOUNT_KEY=*** nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓 2022-04-22 17:39:25] $ export AZURE_CONTAINER=container nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓 2022-04-22 17:39:48] $ export AZURE_ENVIRONMENT=AzureUSGovernmentCloud nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓 2022-04-22 17:40:15] $ bazel test --test_output=streamed --test_arg=-test.v --action_env=AZURE_ACCOUNT_NAME --action_env=AZURE_ACCOUNT_KEY --action_env=AZURE_CONTAINER --action_env=AZURE_ENVIRONMENT //pkg/cloud/azure:azure_test INFO: Invocation ID: aa88a942-f3c7-4df6-bade-8f5f0e18041f WARNING: Streamed test output requested. All tests will be run locally, without sharding, one at a time INFO: Build option --action_env has changed, discarding analysis cache. INFO: Analyzed target //pkg/cloud/azure:azure_test (468 packages loaded, 16382 targets configured). INFO: Found 1 test target... 
initialized metamorphic constant "span-reuse-rate" with value 28 === RUN TestAzure === RUN TestAzure/simple_round_trip === RUN TestAzure/exceeds-4mb-chunk === RUN TestAzure/exceeds-4mb-chunk/rand-readats === RUN TestAzure/exceeds-4mb-chunk/rand-readats/#00 cloud_test_helpers.go:226: read 3345 of file at 4778744 === RUN TestAzure/exceeds-4mb-chunk/rand-readats/#1 cloud_test_helpers.go:226: read 7228 of file at 226589 === RUN TestAzure/exceeds-4mb-chunk/rand-readats/#2 cloud_test_helpers.go:226: read 634 of file at 256284 === RUN TestAzure/exceeds-4mb-chunk/rand-readats/#3 cloud_test_helpers.go:226: read 7546 of file at 3546208 === RUN TestAzure/exceeds-4mb-chunk/rand-readats/#4 cloud_test_helpers.go:226: read 24123 of file at 4821795 === RUN TestAzure/exceeds-4mb-chunk/rand-readats/#5 cloud_test_helpers.go:226: read 16899 of file at 403428 === RUN TestAzure/exceeds-4mb-chunk/rand-readats/#6 cloud_test_helpers.go:226: read 29467 of file at 4886370 === RUN TestAzure/exceeds-4mb-chunk/rand-readats/#7 cloud_test_helpers.go:226: read 11700 of file at 1876920 === RUN TestAzure/exceeds-4mb-chunk/rand-readats/#8 cloud_test_helpers.go:226: read 2928 of file at 489781 === RUN TestAzure/exceeds-4mb-chunk/rand-readats/cockroachdb#9 cloud_test_helpers.go:226: read 19933 of file at 1483342 === RUN TestAzure/read-single-file-by-uri === RUN TestAzure/write-single-file-by-uri === RUN TestAzure/file-does-not-exist === RUN TestAzure/List === RUN TestAzure/List/root === RUN TestAzure/List/file-slash-numbers-slash === RUN TestAzure/List/root-slash === RUN TestAzure/List/file === RUN TestAzure/List/file-slash === RUN TestAzure/List/slash-f === RUN TestAzure/List/nothing === RUN TestAzure/List/delim-slash-file-slash === RUN TestAzure/List/delim-data --- PASS: TestAzure (34.81s) --- PASS: TestAzure/simple_round_trip (9.66s) --- PASS: TestAzure/exceeds-4mb-chunk (16.45s) --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats (6.41s) --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#00 (0.15s) 
--- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#1 (0.64s) --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#2 (0.65s) --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#3 (0.60s) --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#4 (0.75s) --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#5 (0.80s) --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#6 (0.75s) --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#7 (0.65s) --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#8 (0.65s) --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/cockroachdb#9 (0.77s) --- PASS: TestAzure/read-single-file-by-uri (0.60s) --- PASS: TestAzure/write-single-file-by-uri (0.60s) --- PASS: TestAzure/file-does-not-exist (1.05s) --- PASS: TestAzure/List (2.40s) --- PASS: TestAzure/List/root (0.30s) --- PASS: TestAzure/List/file-slash-numbers-slash (0.30s) --- PASS: TestAzure/List/root-slash (0.30s) --- PASS: TestAzure/List/file (0.30s) --- PASS: TestAzure/List/file-slash (0.30s) --- PASS: TestAzure/List/slash-f (0.30s) --- PASS: TestAzure/List/nothing (0.15s) --- PASS: TestAzure/List/delim-slash-file-slash (0.15s) --- PASS: TestAzure/List/delim-data (0.30s) === RUN TestAntagonisticAzureRead --- PASS: TestAntagonisticAzureRead (103.90s) === RUN TestParseAzureURL === RUN TestParseAzureURL/Defaults_to_Public_Cloud_when_AZURE_ENVIRONEMNT_unset === RUN TestParseAzureURL/Can_Override_AZURE_ENVIRONMENT --- PASS: TestParseAzureURL (0.00s) --- PASS: TestParseAzureURL/Defaults_to_Public_Cloud_when_AZURE_ENVIRONEMNT_unset (0.00s) --- PASS: TestParseAzureURL/Can_Override_AZURE_ENVIRONMENT (0.00s) === RUN TestMakeAzureStorageURLFromEnvironment === RUN TestMakeAzureStorageURLFromEnvironment/AzurePublicCloud === RUN TestMakeAzureStorageURLFromEnvironment/AzureUSGovernmentCloud --- PASS: TestMakeAzureStorageURLFromEnvironment (0.00s) --- PASS: TestMakeAzureStorageURLFromEnvironment/AzurePublicCloud (0.00s) --- PASS: TestMakeAzureStorageURLFromEnvironment/AzureUSGovernmentCloud (0.00s) PASS Target 
//pkg/cloud/azure:azure_test up-to-date: _bazel/bin/pkg/cloud/azure/azure_test_/azure_test INFO: Elapsed time: 159.865s, Critical Path: 152.35s INFO: 66 processes: 2 internal, 64 darwin-sandbox. INFO: Build completed successfully, 66 total actions //pkg/cloud/azure:azure_test PASSED in 139.9s INFO: Build completed successfully, 66 total actions ``` 80705: kvclient: fix gRPC stream leak in rangefeed client r=tbg,srosenberg a=erikgrinaker When the DistSender rangefeed client received a `RangeFeedError` message and propagated a retryable error up the stack, it would fail to close the existing gRPC stream, causing stream/goroutine leaks. Release note (bug fix): Fixed a goroutine leak when internal rangefeed clients received certain kinds of retriable errors. 80762: joberror: add ConnectionReset/ConnectionRefused to retryable err allow list r=miretskiy a=adityamaru Bulk jobs will no longer treat `sysutil.IsErrConnectionReset` and `sysutil.IsErrConnectionRefused` as permanent errors. IMPORT, RESTORE and BACKUP will treat this error as transient and retry. Release note: None 80773: backupccl: break dependency to testcluster r=irfansharif a=irfansharif Noticed we were building testing library packages when building CRDB binaries. $ bazel query "somepath(//pkg/cmd/cockroach-short, //pkg/testutils/testcluster)" //pkg/cmd/cockroach-short:cockroach-short //pkg/cmd/cockroach-short:cockroach-short_lib //pkg/ccl:ccl //pkg/ccl/backupccl:backupccl //pkg/testutils/testcluster:testcluster Release note: None Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com> Co-authored-by: Nathan Lowe <nathan.lowe@spacex.com> Co-authored-by: Erik Grinaker <grinaker@cockroachlabs.com> Co-authored-by: Aditya Maru <adityamaru@gmail.com> Co-authored-by: irfan sharif <irfanmahmoudsharif@gmail.com>
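The AZURE_ENVIRONMENT behavior verified above — resolving an environment name to a blob endpoint and defaulting to the public cloud when unset — can be sketched as follows. This is an illustrative simplification, not the cloud provider's actual code; the real implementation derives endpoints from the Azure SDK's environment table:

```go
package main

import "fmt"

// blobEndpointSuffix maps an Azure environment name to its blob storage
// endpoint suffix. The values here are the well-known public ones.
var blobEndpointSuffix = map[string]string{
	"AzurePublicCloud":       "blob.core.windows.net",
	"AzureUSGovernmentCloud": "blob.core.usgovcloudapi.net",
	"AzureChinaCloud":        "blob.core.chinacloudapi.cn",
}

// storageURL builds the account's blob URL, defaulting to the public
// cloud when the environment is unset, mirroring the backwards-compat
// behavior described above. (Sketch with invented names.)
func storageURL(account, environment string) (string, error) {
	if environment == "" {
		environment = "AzurePublicCloud"
	}
	suffix, ok := blobEndpointSuffix[environment]
	if !ok {
		return "", fmt.Errorf("unknown Azure environment %q", environment)
	}
	return fmt.Sprintf("https://%s.%s", account, suffix), nil
}

func main() {
	u, _ := storageURL("account", "")
	fmt.Println(u) // public-cloud default
	u, _ = storageURL("account", "AzureUSGovernmentCloud")
	fmt.Println(u) // government-cloud endpoint
}
```

This makes the failure mode shown in the verification evidence easy to see: omitting the parameter resolves the account against `blob.core.windows.net`, where a government-cloud storage account does not exist.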