Ajwerner/wrapped descriptors #4
Commits on May 28, 2020
-
Merge cockroachdb#49475 cockroachdb#49662
49475: opt: create library that determines how joins affect input rows r=andy-kimball a=DrewKimball

Previously, there was no simple way to determine whether all rows from a join input will be included in its output, nor whether input rows will be duplicated by the join. This patch adds a library that constructs a Multiplicity struct for join operators. The Multiplicity can be queried for information about how a join will affect its input rows (e.g. duplicated, filtered and/or null-extended). The existing SimplifyLeftJoinWithFilters rule has been refactored to use this library. The Multiplicity library will also be useful for future join elimination and limit pushdown rules. Release note: None

49662: roachtest: don't run schema change workload on 19.2 releases r=spaskob a=spaskob

Fixes cockroachdb#47024. Release note (bug fix): The schema change workload is meant for testing the behavior of schema changes on clusters with nodes with min version 19.2. It will deadlock on earlier versions.

Co-authored-by: Drew Kimball <andrewekimball@gmail.com>
Co-authored-by: Spas Bojanov <spas@cockroachlabs.com>
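The Multiplicity idea described above can be sketched as a tiny Go program. This is an illustrative sketch only, not the actual optimizer API: the type, flag, and method names below are hypothetical stand-ins for whatever the library actually exposes.

```go
package main

import "fmt"

// multiplicityFlags is a hypothetical bitfield recording how a join
// treats its input rows.
type multiplicityFlags uint8

const (
	preservesLeftRows multiplicityFlags = 1 << iota
	preservesRightRows
	duplicatesLeftRows
	duplicatesRightRows
)

// multiplicity is a sketch of the struct the commit describes; callers
// query it instead of re-deriving join properties.
type multiplicity struct {
	flags multiplicityFlags
}

// JoinPreservesLeftRows reports whether every left input row appears in
// the join output at least once (the property a rule like
// SimplifyLeftJoinWithFilters cares about).
func (m multiplicity) JoinPreservesLeftRows() bool {
	return m.flags&preservesLeftRows != 0
}

// JoinDuplicatesLeftRows reports whether a left input row can appear in
// the output more than once.
func (m multiplicity) JoinDuplicatesLeftRows() bool {
	return m.flags&duplicatesLeftRows != 0
}

func main() {
	// E.g. a left outer join on a key that is unique on the right side:
	// left rows are preserved and never duplicated.
	m := multiplicity{flags: preservesLeftRows}
	fmt.Println(m.JoinPreservesLeftRows(), m.JoinDuplicatesLeftRows()) // true false
}
```

The point of the bitfield shape is that a rewrite rule can ask one cheap question ("are left rows preserved and not duplicated?") rather than pattern-matching join structure itself.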
SHA: 9e1666b
Commits on May 29, 2020
-
49611: searchpath: add pg_extension to searchpath.Iter() before public r=rohany a=otan Release note (sql change): When doing name resolution via search path, the `pg_extension` schema (containing tables such as `geometry_columns`, `geography_columns` and `spatial_ref_sys`) will have an attempted resolution before the `public` schema. This mimics PostGIS behavior where the aforementioned tables are in the public schema, and so by default are discoverable tables with a new CLI session. Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
SHA: f6ce99f
-
lease: fix flaky TestRangefeedUpdatesHandledProperlyInTheFaceOfRaces
When this test was introduced, the cluster version override was added to ensure that rangefeeds would be used. This was not necessary given the other testing knob which this change adopts. The problem with overriding the version is that it can happen after the automatic version upgrade has already occurred, leading to a fatal error due to trying to downgrade the cluster version. Fixes cockroachdb#49632. Release note: None
SHA: 5e33b5c
-
opt: fold join simplification rules
Previously, there were two pairs of rules for join simplification; one to handle joins with filters, and one to handle cross joins. Since the filtersMatchAllLeftRows function can now handle cross joins, there is no reason to have two sets of rules. This patch folds the four join simplification rules into two rules: SimplifyLeftJoin and SimplifyRightJoin. Release note: None
SHA: 4d1f7ba
-
49681: lease: fix flaky TestRangefeedUpdatesHandledProperlyInTheFaceOfRaces r=ajwerner a=ajwerner

When this test was introduced, the cluster version override was added to ensure that rangefeeds would be used. This was not necessary given the other testing knob which this change adopts. The problem with overriding the version is that it can happen after the automatic version upgrade has already occurred, leading to a fatal error due to trying to downgrade the cluster version. Release note: None

Co-authored-by: Andrew Werner <ajwerner@cockroachlabs.com>
SHA: 980fc7e
-
lease: un-track deprecated use of Gossip
The work to remove the dependency has been done, so we're just waiting until we can remove the code. Until then, make sure this use of Gossip does not show up as a prominent caller to DeprecatedGossip any more, for easier bookkeeping. Release note: None
SHA: a53153c
-
ui: setup Cypress configuration for screenshot testing
Visual regression testing is accomplished by the `cypress-image-snapshot` plugin. The main issue solved with this change is how modules are loaded from a custom `node_modules` location (./opt/node_modules).
- for .ts files, `baseUrl` and `typeRoots` point to `./opt/node_modules` to resolve packages and typings.
- Webpack configuration is extended with an additional module resolver.
- Plugin modules (which aren't processed by webpack) use a helper function which appends the path to `opt/node_modules` to the required module name.

Release note: None
SHA: 8b02b0c
-
Makefile: fix handling of non-existent *.eg.go files
When a `bin/*.d` file references an `*.eg.go` file, that file was required to exist or be buildable by the Makefile. Any commit which removed a `*.eg.go` file would violate this requirement causing the build to fail until the offending `bin/*.d` file was removed. In order to prevent this badness, a catchall `%.eg.go` rule is added which will force the target dependent on the `bin/%.d` file to be rebuilt. Fixes cockroachdb#49676 Release note: None
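The catch-all idea can be sketched as a GNU make fragment. This is illustrative only, assuming standard make semantics, and is not necessarily the exact rule from the commit: a match-anything pattern rule with an empty recipe lets make treat a vanished generated file as something it "made", so the dependent target is rebuilt instead of the build dying with "No rule to make target".

```make
# Illustrative sketch only. If a stale bin/foo.d dependency file still
# names pkg/sql/colexec/foo.eg.go after that file was deleted from the
# tree, this catch-all rule matches it. The empty recipe means make
# considers the prerequisite remade (and always out of date), so the
# target that depends on it is rebuilt rather than the build failing.
%.eg.go: ;
```

The trade-off of such a rule is that a genuinely missing generated file no longer fails fast; it simply forces a rebuild of whatever depended on it.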
SHA: 5868836
-
opt: fix bug with histograms and multi-span index constraints
Prior to this commit, filtering a histogram with a multi-span index constraint could lead to incorrect results if the histogram column was part of the exact prefix of the constraint. This was due to the fact that the same value appeared in multiple spans, breaking the assumption of the histogram code that values in spans were always ordered and non-overlapping. For example, filtering a histogram on column b with the constraint /b/c/: [/1/2 - /1/2] [/1/4 - /1/4] would return an incorrect result, because the values in the matching histogram bucket would be counted twice. This commit fixes the problem by only considering a single span if the column is part of the exact prefix. Release note (performance improvement): Fixed a bug in the histogram logic in the optimizer which was causing an over-estimate for the cardinality of constrained index scans in some cases when multiple columns of the index were constrained. This problem was introduced early in the development for the 20.2 release so should not have ever been part of a release. The fix improves the optimizer's cardinality estimates so may result in better query plan selection.
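The double counting described above can be shown with toy numbers. The function name and figures below are hypothetical, not the real histogram code: with the constraint /b/c/: [/1/2 - /1/2] [/1/4 - /1/4], both spans pin b to the value 1, so filtering the histogram once per span counts the same bucket twice, while the fix considers a single span for an exact-prefix column.

```go
package main

import "fmt"

// estimatedRows returns a histogram row estimate for a column that every
// span of an index constraint pins to the same single value. Without the
// fix, each span re-counts the same bucket.
func estimatedRows(bucketRows float64, numSpans int, exactPrefixFix bool) float64 {
	if exactPrefixFix {
		// Column is part of the exact prefix: all spans constrain it to
		// the same value, so filter the histogram with a single span.
		return bucketRows
	}
	// Buggy behavior: sum the matching bucket once per span.
	return bucketRows * float64(numSpans)
}

func main() {
	// Two spans, both with b = 1; the bucket holding b = 1 has 10 rows.
	fmt.Println(estimatedRows(10, 2, false)) // over-estimate: 20
	fmt.Println(estimatedRows(10, 2, true))  // fixed: 10
}
```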
SHA: 1353259
-
opt: fix distinct count estimates for index constraint columns
Prior to this commit, the statistics_builder was incorrectly estimating the distinct count of columns that were only slightly constrained as part of an index constraint. For example, it was estimating based on constraints such as /a/b: [/1 - /5/6] or /a/b: [ - /5/6] that the distinct count of column b should be reduced by 2/3. However, in reality, we cannot assume anything about the distinct count of column b based on those two constraints. This commit fixes the estimate by only reducing the distinct count for columns that are part of the prefix of the constraint (columns for which all the spans have the same start and end values) or the first column after. Release note (performance improvement): Fixed the optimizer's distinct count estimate for columns constrained by an index constraint, which was too low in some cases. The fix improves the optimizer's cardinality estimates, which can lead to better query plan selection.
SHA: 2fced08
-
opt: ensure multi-col stats are consistent with single-col stats
Prior to this commit, it was possible that the estimated distinct count of a multi-column statistic was larger than the product of the estimated distinct counts of all of its individual columns. This could happen because we reduce the distinct count of columns that are constrained by a predicate, but we don't update the distinct counts of any multi-column stats that contain that column. This is internally inconsistent and could lead to bad cardinality estimates. This commit fixes the problem by explicitly checking for such inconsistencies and fixing them after the stats for each operator are built or a column stat is calculated. If it is found that the estimated distinct count is too high, it is reset to the product of the distinct counts of its constituent columns. This commit also adds a check that the distinct count of the multi-column statistic is no smaller than the largest distinct count of its constituent columns (although I haven't yet found a test case where this was a problem). Release note (performance improvement): Fixed the optimizer's estimated distinct count for a multi-column statistic when all of the columns in the statistic are constrained by a filter predicate. The fix can lead to improved cardinality estimates, leading to better query plan selection in some cases.
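The two invariants described above can be sketched numerically. The helper below is a hypothetical illustration, not the actual statistics builder code: a multi-column distinct count is clamped into the range [max(single-column counts), product(single-column counts)].

```go
package main

import "fmt"

// clampMultiColDistinct enforces the consistency checks from the commit:
// a multi-column distinct count can be no larger than the product of its
// constituent columns' distinct counts, and no smaller than the largest
// of them.
func clampMultiColDistinct(multi float64, singles []float64) float64 {
	product, maxSingle := 1.0, 0.0
	for _, d := range singles {
		product *= d
		if d > maxSingle {
			maxSingle = d
		}
	}
	if multi > product {
		// Too high: reset to the product of the single-column counts.
		multi = product
	}
	if multi < maxSingle {
		// Too low: raise to the largest single-column count.
		multi = maxSingle
	}
	return multi
}

func main() {
	// After a filter, distinct(x)=2 and distinct(y)=3, but a stale
	// multi-column stat still claims distinct({x,y})=10: reset to 2*3=6.
	fmt.Println(clampMultiColDistinct(10, []float64{2, 3})) // 6
	// Too small is also inconsistent: raise 1 up to max(2, 3)=3.
	fmt.Println(clampMultiColDistinct(1, []float64{2, 3})) // 3
}
```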
SHA: dbcde09
-
49696: Makefile: fix handling of non-existent *.eg.go files r=petermattis a=petermattis When a `bin/*.d` file references an `*.eg.go` file, that file was required to exist or be buildable by the Makefile. Any commit which removed a `*.eg.go` file would violate this requirement causing the build to fail until the offending `bin/*.d` file was removed. In order to prevent this badness, a catchall `%.eg.go` rule is added which will force the target dependent on the `bin/%.d` file to be rebuilt. Fixes cockroachdb#49676 Release note: None Co-authored-by: Peter Mattis <petermattis@gmail.com>
SHA: 6740c94
-
opt: add support for calculating selectivity from multi-column stats
This commit adds support for calculating selectivity from multi-column statistics. It changes selectivityFromDistinctCounts to have the following semantics: selectivityFromDistinctCounts calculates the selectivity of a filter by using estimated distinct counts of each constrained column before and after the filter was applied. We can perform this calculation in two different ways: (1) by treating the columns as completely independent, or (2) by assuming they are correlated.

(1) Assuming independence between columns, we can calculate the selectivity by taking the product of selectivities of each constrained column. In the general case, this can be represented by the formula:

```
selectivity = Π (new_distinct(i) / old_distinct(i))  for i in {constrained columns}
```

(2) If useMultiCol is true, we assume there is some correlation between columns. In this case, we calculate the selectivity using multi-column statistics:

```
selectivity = new_distinct({constrained columns}) / old_distinct({constrained columns})
```

This formula looks simple, but the challenge is that it is difficult to determine the correct value for new_distinct({constrained columns}) if each column is not constrained to a single value. For example, if new_distinct(x)=2 and new_distinct(y)=2, new_distinct({x,y}) could be 2, 3 or 4.
We estimate the new distinct count as follows, using the concept of "soft functional dependency (FD) strength":

```
new_distinct({x,y}) = min_value + range * (1 - FD_strength_scaled)

where

  min_value          = max(new_distinct(x), new_distinct(y))
  max_value          = new_distinct(x) * new_distinct(y)
  range              = max_value - min_value
  FD_strength        = max(old_distinct(x), old_distinct(y)) / old_distinct({x,y})
  min_FD_strength    = max(old_distinct(x), old_distinct(y)) / (old_distinct(x) * old_distinct(y))
  FD_strength_scaled = (FD_strength - min_FD_strength) / (1 - min_FD_strength)
```

Suppose that old_distinct(x)=100 and old_distinct(y)=10. If x and y are perfectly correlated, old_distinct({x,y})=100. Using the example from above, new_distinct(x)=2 and new_distinct(y)=2. Plugging the values into the equation, we get:

```
FD_strength_scaled  = 1
new_distinct({x,y}) = 2 + (4 - 2) * (1 - 1) = 2
```

If x and y are completely independent, however, old_distinct({x,y})=1000. In this case, we get:

```
FD_strength_scaled  = 0
new_distinct({x,y}) = 2 + (4 - 2) * (1 - 0) = 4
```

Note that even if useMultiCol is true and we calculate the selectivity based on equation (2) above, we still want to take equation (1) into account. This is because it is possible that there are two predicates that each have selectivity s, but the multi-column selectivity is also s. In order to ensure that the cost model considers the two predicates combined to be more selective than either one individually, we must give some weight to equation (1). Therefore, instead of equation (2) we actually return the following selectivity:

```
selectivity = (1 - w) * (eq. 1) + w * (eq. 2)
```

where w is currently set to 0.9. This selectivity will be used later to update the row count and the distinct count for the unconstrained columns.
Fixes cockroachdb#34422 Release note (performance improvement): Added support for calculating the selectivity of filter predicates in the optimizer using multi-column statistics. This improves the cardinality estimates of the optimizer when a query has filter predicates constraining multiple columns. As a result, the optimizer may choose a better query plan in some cases.
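The worked example in the commit message above can be checked with a small program. The function name is hypothetical; the real logic lives in the optimizer's statistics builder.

```go
package main

import "fmt"

// newDistinctMulti estimates new_distinct({x,y}) from the single-column
// distinct counts before (oldX, oldY) and after (newX, newY) the filter,
// plus the old multi-column distinct count oldXY, using the
// soft-FD-strength interpolation from the commit message.
func newDistinctMulti(newX, newY, oldX, oldY, oldXY float64) float64 {
	minVal := newX
	if newY > minVal {
		minVal = newY
	}
	maxVal := newX * newY
	rng := maxVal - minVal

	oldMax := oldX
	if oldY > oldMax {
		oldMax = oldY
	}
	fdStrength := oldMax / oldXY
	minFDStrength := oldMax / (oldX * oldY)
	fdScaled := (fdStrength - minFDStrength) / (1 - minFDStrength)

	return minVal + rng*(1-fdScaled)
}

func main() {
	// old_distinct(x)=100, old_distinct(y)=10,
	// new_distinct(x)=new_distinct(y)=2.
	fmt.Println(newDistinctMulti(2, 2, 100, 10, 100))  // perfectly correlated: 2
	fmt.Println(newDistinctMulti(2, 2, 100, 10, 1000)) // fully independent: 4
}
```

Both outputs match the hand-computed values in the commit message: perfect correlation collapses the estimate to max(new counts), full independence expands it to their product.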
SHA: df34c56
-
Updated path to Dockerfile in README
Aaron Blum committed May 29, 2020
SHA: 40a116d
-
build: remove muslc from the building process
This commit removes building muslc from the release process. It is causing issues with the GEOS build, and we're not confident it is worth spending the time to debug it.
* Delete mentions of muslc from publishing artifacts / provisional artifacts.
* Delete muslc from the Docker image.
* Remove Makefile items that are musl-c related.
* Unfortunately, I am not able to push the Docker image to cockroachdb/builder. I'm not sure that's important atm.

Release note (general change): Removed the publication of muslc CockroachDB builds.
SHA: 4a307bc
-
49560: geo/geoproj: initial framework to import the PROJ library r=petermattis,sumeerbhola a=otan This commit introduces PROJ v4.9.3 to CockroachDB's compilation. We cannot use a later version as we would like to exclude installing sqlite3 as a pre-requisite for now. To get this to work, `bin/execgen` must depend on the `LIBPROJ` target to compile, and removed the c-deps ban on `pkg/geo/geoproj`. Added some stub functions (non-finalized) that test whether functionality works as expected. Release note: None Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
SHA: d6a4742
-
Merge cockroachdb#49656 cockroachdb#49699
49656: sql: fix panic when attempting to clone a table with a UDT r=rohany a=rohany

Due to how the type metadata was represented, there could have been structs with unimplemented fields in the `TypeMeta`'s `UserDefinedTypeName`. This would cause protobuf to panic when attempting to perform reflection into `types.T` when cloning. Release note: None

49699: Fixed code rot in builder README r=aaron-crl a=aaron-crl

Updated Dockerfile path in README.

Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
Co-authored-by: Aaron Blum <aaron@cockroachlabs.com>
SHA: b6d9f9f
-
sqlbase: Add created_by columns to system.jobs
Add created_by_type and created_by_id columns, along with the index over these columns, to the system.jobs table. Add sql migration code to migrate old definition to the new one. See https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20200414_scheduled_jobs.md Release Notes: None
Yevgeniy Miretskiy committed May 29, 2020
SHA: 5939820
-
sql: add LeakProof volatility

This change adds VolatilityLeakProof and changes a few hashing builtins to use it (for consistency with postgres). In postgres, `proleakproof` is a flag separate from `provolatile`, but all instances of leak-proof operators are immutable. We also change the volatility to a simple index since it no longer maps to the postgres volatility directly. In the optimizer, we will need to have a bitfield for which types of volatilities are contained in an expression, so having them be small consecutive integers is desirable. Release note: None
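The "small consecutive integers" rationale can be sketched as follows. The names below are a hypothetical illustration (the real types live in `tree` and the optimizer): with volatilities numbered 0..n, the set of volatilities appearing in an expression fits in a single machine word.

```go
package main

import "fmt"

// Volatility is a small consecutive integer, so it can double as a bit
// position. Names here are illustrative, not the exact source.
type Volatility uint8

const (
	VolatilityLeakProof Volatility = iota
	VolatilityImmutable
	VolatilityStable
	VolatilityVolatile
)

// VolatilitySet is a bitfield of the volatilities contained in an
// expression tree.
type VolatilitySet uint32

// Add records that an expression with volatility v appears in the tree.
func (s *VolatilitySet) Add(v Volatility) { *s |= 1 << v }

// Contains reports whether any expression with volatility v appears.
func (s VolatilitySet) Contains(v Volatility) bool { return s&(1<<v) != 0 }

func main() {
	var s VolatilitySet
	s.Add(VolatilityImmutable)
	s.Add(VolatilityVolatile)
	fmt.Println(s.Contains(VolatilityVolatile), s.Contains(VolatilityStable)) // true false
}
```

A postgres-style mapping (where leak-proof is a separate boolean flag rather than one of the ordered volatility values) would not compose into a bitfield this cheaply, which is the consistency motivation stated in the commit.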
SHA: 1a1207a
-
sql: create schemaexpr package
This commit adds a new package for dealing with expressions within table schemas, such as check constraints, computed columns, and partial index predicates. The utility provided previously by the following functions has been moved and refactored into this new package:
- dequalifyColumnRefs - Now provided by DequalifyColumnRefs.
- makeCheckConstraint - Now provided by CheckConstraintBuilder.
- validateComputedColumn - Now provided by ComputedColumnValidator.
- validateIndexPredicate - Now provided by IndexPredicateValidator.

In addition, several helper functions were refactored into the new package, including generateMaybeDuplicateNameForCheckConstraint, generateNameForCheckConstraint, iterColDescriptorsInExpr, and replaceVars. dummyColumnItems were also moved into the new package. Release note: None
SHA: f895893
-
49685: build: remove muslc from the building process r=otan a=otan

This commit removes building muslc from the release process. It is causing issues with the GEOS build, and we're not confident it is worth spending the time to debug it.
* Delete mentions of muslc from publishing artifacts / provisional artifacts.
* Delete muslc from the Docker image.
* Remove Makefile items that are musl-c related.
* Unfortunately, I am not able to push the Docker image to cockroachdb/builder. I'm not sure that's important atm.

Release note (general change): Removed the publication of muslc CockroachDB builds.

Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
SHA: 569d277
-
49134: opt: add support for calculating selectivity from multi-column stats r=rytaft a=rytaft

**opt: fix bug with histograms and multi-span index constraints**

Prior to this commit, filtering a histogram with a multi-span index constraint could lead to incorrect results if the histogram column was part of the exact prefix of the constraint. This was due to the fact that the same value appeared in multiple spans, breaking the assumption of the histogram code that values in spans were always ordered and non-overlapping. For example, filtering a histogram on column b with the constraint `/b/c/: [/1/2 - /1/2] [/1/4 - /1/4]` would return an incorrect result, because the values in the matching histogram bucket would be counted twice. This commit fixes the problem by only considering a single span if the column is part of the exact prefix.

Release note (performance improvement): Fixed a bug in the histogram logic in the optimizer which was causing an over-estimate for the cardinality of constrained index scans in some cases when multiple columns of the index were constrained. This problem was introduced early in the development for the 20.2 release so should not have ever been part of a release. The fix improves the optimizer's cardinality estimates so may result in better query plan selection.

**opt: fix distinct count estimates for index constraint columns**

Prior to this commit, the statistics_builder was incorrectly estimating the distinct count of columns that were only slightly constrained as part of an index constraint. For example, it was estimating based on constraints such as `/a/b: [/1 - /5/6]` or `/a/b: [ - /5/6]` that the distinct count of column b should be reduced by 2/3. However, in reality, we cannot assume anything about the distinct count of column b based on those two constraints. This commit fixes the estimate by only reducing the distinct count for columns that are part of the prefix of the constraint (columns for which all the spans have the same start and end values) or the first column after.

Release note (performance improvement): Fixed the optimizer's distinct count estimate for columns constrained by an index constraint, which was too low in some cases. The fix improves the optimizer's cardinality estimates, which can lead to better query plan selection.

**opt: ensure multi-col stats are consistent with single-col stats**

Prior to this commit, it was possible that the estimated distinct count of a multi-column statistic was larger than the product of the estimated distinct counts of all of its individual columns. This could happen because we reduce the distinct count of columns that are constrained by a predicate, but we don't update the distinct counts of any multi-column stats that contain that column. This is internally inconsistent and could lead to bad cardinality estimates. This commit fixes the problem by explicitly checking for such inconsistencies and fixing them after the stats for each operator are built or a column stat is calculated. If it is found that the estimated distinct count is too high, it is reset to the product of the distinct counts of its constituent columns. This commit also adds a check that the distinct count of the multi-column statistic is no smaller than the largest distinct count of its constituent columns (although I haven't yet found a test case where this was a problem).

Release note (performance improvement): Fixed the optimizer's estimated distinct count for a multi-column statistic when all of the columns in the statistic are constrained by a filter predicate. The fix can lead to improved cardinality estimates, leading to better query plan selection in some cases.
**opt: add support for calculating selectivity from multi-column stats**

This commit adds support for calculating selectivity from multi-column statistics. It changes `selectivityFromDistinctCounts` to have the following semantics: `selectivityFromDistinctCounts` calculates the selectivity of a filter by using estimated distinct counts of each constrained column before and after the filter was applied. We can perform this calculation in two different ways: (1) by treating the columns as completely independent, or (2) by assuming they are correlated.

(1) Assuming independence between columns, we can calculate the selectivity by taking the product of selectivities of each constrained column. In the general case, this can be represented by the formula:

```
selectivity = Π (new_distinct(i) / old_distinct(i))  for i in {constrained columns}
```

(2) If `useMultiCol` is true, we assume there is some correlation between columns. In this case, we calculate the selectivity using multi-column statistics:

```
selectivity = new_distinct({constrained columns}) / old_distinct({constrained columns})
```

This formula looks simple, but the challenge is that it is difficult to determine the correct value for new_distinct({constrained columns}) if each column is not constrained to a single value. For example, if new_distinct(x)=2 and new_distinct(y)=2, new_distinct({x,y}) could be 2, 3 or 4.

We estimate the new distinct count as follows, using the concept of "soft functional dependency (FD) strength":

```
new_distinct({x,y}) = min_value + range * (1 - FD_strength_scaled)

where

  min_value          = max(new_distinct(x), new_distinct(y))
  max_value          = new_distinct(x) * new_distinct(y)
  range              = max_value - min_value
  FD_strength        = max(old_distinct(x), old_distinct(y)) / old_distinct({x,y})
  min_FD_strength    = max(old_distinct(x), old_distinct(y)) / (old_distinct(x) * old_distinct(y))
  FD_strength_scaled = (FD_strength - min_FD_strength) / (1 - min_FD_strength)
```

Suppose that old_distinct(x)=100 and old_distinct(y)=10. If x and y are perfectly correlated, old_distinct({x,y})=100. Using the example from above, new_distinct(x)=2 and new_distinct(y)=2. Plugging the values into the equation, we get:

```
FD_strength_scaled  = 1
new_distinct({x,y}) = 2 + (4 - 2) * (1 - 1) = 2
```

If x and y are completely independent, however, old_distinct({x,y})=1000. In this case, we get:

```
FD_strength_scaled  = 0
new_distinct({x,y}) = 2 + (4 - 2) * (1 - 0) = 4
```

Note that even if `useMultiCol` is true and we calculate the selectivity based on equation (2) above, we still want to take equation (1) into account. This is because it is possible that there are two predicates that each have selectivity s, but the multi-column selectivity is also s. In order to ensure that the cost model considers the two predicates combined to be more selective than either one individually, we must give some weight to equation (1). Therefore, instead of equation (2) we actually return the following selectivity:

```
selectivity = (1 - w) * (eq. 1) + w * (eq. 2)
```

where w is currently set to 0.9. This selectivity is used to update the row count and the distinct count for the unconstrained columns.
Fixes cockroachdb#34422 Release note (performance improvement): Added support for calculating the selectivity of filter predicates in the optimizer using multi-column statistics. This improves the cardinality estimates of the optimizer when a query has filter predicates constraining multiple columns. As a result, the optimizer may choose a better query plan in some cases. Co-authored-by: Rebecca Taft <becca@cockroachlabs.com>
SHA: 6dd95c2
-
vendor: bump pebble to a41e4ac929214c1aaf7d152e2042f04f960f63ac
a41e4a *: Update level_checker to account for sublevels
b14003 internal/manifest: Relax L0 CheckOrdering invariants for flush splits
9249ee db: add max memtable and max batch size checks
bb2f85 vfs: return nil from RemoveAll for nonexistent paths
eb5d41 db: use compaction picker's scores directly in (*db).Metrics
65313e cmd/pebble: sample read amp throughout compaction benchmarks

Release note: None
SHA: e9e8610
-
colflow: use storage/FS.RemoveAll
Use the new storage/FS.RemoveAll method to clean up temporary directories. Release note: none
SHA: 96518da
-
Merge cockroachdb#49444 cockroachdb#49680 cockroachdb#49713
49444: sqlbase: Add created_by columns to system.jobs r=miretskiy a=miretskiy

Informs cockroachdb#49346. Add created_by_type and created_by_id columns, along with the index over these columns, to the system.jobs table. Add sql migration code to migrate the old definition to the new one. See https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20200414_scheduled_jobs.md Release Notes: None

49680: sql: add LeakProof volatility r=RaduBerinde a=RaduBerinde

This change adds VolatilityLeakProof and changes a few hashing builtins to use it (for consistency with postgres). In postgres, `proleakproof` is a flag separate from `provolatile`, but all instances of leak-proof operators are immutable. We also change the volatility to a simple index since it no longer maps to the postgres volatility directly. In the optimizer, we will need to have a bitfield for which types of volatilities are contained in an expression, so having them be small consecutive integers is desirable. Release note: None

![image](https://user-images.githubusercontent.com/16544120/83211087-13631d00-a111-11ea-9a10-cae758a4cfbf.png)

49713: vendor: bump pebble to a41e4ac9 r=jbowens a=jbowens

a41e4a *: Update level_checker to account for sublevels
b14003 internal/manifest: Relax L0 CheckOrdering invariants for flush splits
9249ee db: add max memtable and max batch size checks
bb2f85 vfs: return nil from RemoveAll for nonexistent paths
eb5d41 db: use compaction picker's scores directly in (*db).Metrics
65313e cmd/pebble: sample read amp throughout compaction benchmarks

Co-authored-by: Yevgeniy Miretskiy <yevgeniy@cockroachlabs.com>
Co-authored-by: Radu Berinde <radu@cockroachlabs.com>
Co-authored-by: Jackson Owens <jackson@cockroachlabs.com>
SHA: c52e94c
-
sql: don't split ranges on user-defined types or schemas
Noticed while working on multi-tenancy and confirmed that this was an issue manually.

```
root@127.0.0.1:51841/movr> set experimental_enable_enums=true;
SET

Time: 259µs

root@127.0.0.1:51841/movr> select count(*) from crdb_internal.ranges;
  count
---------
     62
(1 row)

Time: 2.251ms

root@127.0.0.1:51841/movr> CREATE TYPE t AS ENUM ();
CREATE TYPE

Time: 1.698ms

root@127.0.0.1:51841/movr> select count(*) from crdb_internal.ranges;
  count
---------
     63
(1 row)

Time: 1.815ms
```

I tried to write a logic test that did essentially this, but that ended up being pretty frustrating to get right and the end result seemed too fragile to be worth it.
SHA: 3c6081f
-
49667: sql: don't split ranges on user-defined types or schemas r=nvanbenschoten a=nvanbenschoten

Noticed while working on multi-tenancy and confirmed that this was an issue manually.

```
root@127.0.0.1:51841/movr> set experimental_enable_enums=true;
SET

Time: 259µs

root@127.0.0.1:51841/movr> select count(*) from crdb_internal.ranges;
  count
---------
     62
(1 row)

Time: 2.251ms

root@127.0.0.1:51841/movr> CREATE TYPE t AS ENUM ();
CREATE TYPE

Time: 1.698ms

root@127.0.0.1:51841/movr> select count(*) from crdb_internal.ranges;
  count
---------
     63
(1 row)

Time: 1.815ms
```

I tried to write a logic test that did essentially this, but that ended up being pretty frustrating to get right and the end result seemed too fragile to be worth it.

Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Commit: 95916e4
sql: allow NULL IN <subquery> to typecheck
Release note (sql change): Previously, using NULL (or tuples containing NULLs) as the left-hand-side of an IN operator would not typecheck unless the NULLs were explicitly casted. Now, the casting is not required.
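The change above brings `NULL IN <subquery>` in line with standard SQL tri-valued logic. As an illustrative sketch only (using Python's stdlib `sqlite3` as a stand-in engine, not CockroachDB), the semantics the typechecker now permits look like this:

```python
import sqlite3

# In-memory database; SQLite follows the same tri-valued IN semantics.
conn = sqlite3.connect(":memory:")

def scalar(query):
    """Run a single-value query and return the result."""
    return conn.execute(query).fetchone()[0]

# NULL IN <non-empty subquery with no match> is NULL (unknown), not false.
unknown = scalar("SELECT NULL IN (SELECT 1)")

# NULL IN <empty subquery> is FALSE: there is no row to compare against,
# so no comparison ever evaluates to unknown.
empty = scalar("SELECT NULL IN (SELECT 1 WHERE 1 = 0)")

print(unknown, empty)  # None 0
```

The point of the fix is only that such an expression now typechecks without an explicit cast on the NULL; the runtime semantics were already standard.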
Commit: d0e9ea5
encoding: define EncodeTableKey / DecodeTableKey for Geo types
For GROUP BY to work, there needs to be a way of encoding / decoding table keys for each column type. As such, define EncodeTableKey and DecodeTableKey for geo types to be their underlying protobuf representation (we currently have no better way!). Note that this does not yet allow us to support PKs for geospatial (we could, but we need some more iteration beforehand). I've shifted the array descriptor markers around as they seem to be new in 20.2 as well -- so they should be stable in the upcoming alpha release. Release note: None
Commit: 81aef92
Commits on May 30, 2020
49419: colexec: add binary overloads on datum-backed types r=yuzefovich a=yuzefovich

**tree, colexec: preliminary steps for datum-backed binary overloads**

This commit does a few things that are preliminary to adding support of binary operations on datum-backed vectors:
1. it exports the `fn` field of `tree.BinaryExpr` - we will be using this function to evaluate the binary operation on datum-backed vectors
2. it renames `decimalOverloadScratch` to `overloadHelper` and plumbs it from the planning stage for projection and selection operators. This struct will be used to supply the resolved `BinaryExpr.Fn` function (and possibly other information) later
3. it cleans up the `vectorize_types` logic test a bit since we now support all types.

Release note: None

**colexec: add support for binary operations on datum-backed types**

This commit adds the support for 4 binary operations when at least one of the arguments is a datum-backed vector and the result is a datum-backed vector as well. We reuse the same function that the row engine uses when evaluating such expressions, and the function is plumbed during planning via the `overloadHelper` struct. Native (i.e. not datum-backed) arguments are converted to datums at runtime.

Release note: None

Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
Commit: 094f563
storage: remove compaction-debt based ingestion backpressure
Fixes cockroachdb#49716 Release note (performance improvement): Remove compaction-debt based sstable ingestion backpressure which was artificially slowing down IMPORTs and RESTOREs on Pebble and not providing any utility on RocksDB. Removed the private "rocksdb.ingest_backpressure.pending_compaction_threshold" cluster setting.
Commit: 65138ca
Remove the Dummy import plague on the template files by using the `goimports` infrastructure to make the .eg.go files get their imports at generation time. Release note: None
Commit: 053450a
instead of repeated strings.ReplaceAll Release note: None
Commit: 2735048
sql: create implicit array types for UDTs
Fixes cockroachdb#49197. This PR adds the Postgres behavior of creating an implicit array type for a user defined type when it is created. The implicit array type will track the user defined type. Release note: None
Commit: 8ee992b
49678: sql: create implicit array types for UDTs r=otan,jordanlewis a=rohany Fixes cockroachdb#49197. This PR adds the Postgres behavior of creating an implicit array type for a user defined type when it is created. The implicit array type will track the user defined type. Release note: None Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
Commit: 021d876
changefeedccl: retry posting to confluent registry
Sometimes we see timeouts or other issues contacting the schema registry. However it would be nice if transient errors could be handled gracefully without bubbling up to the user. Release note (enterprise change): Changefeeds now retry after encountering transient errors contacting the Confluent Schema Registry.
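The retry behavior described here is the classic bounded-retry-with-backoff pattern. A minimal sketch under assumed names (`post_schema`, `TransientError` are hypothetical, not the actual changefeedccl code):

```python
import time

class TransientError(Exception):
    """Stand-in for a timeout or other retryable registry error."""

def with_retries(fn, max_attempts=10, base_delay=0.01):
    """Call fn, retrying transient errors with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts:
                raise  # bubble the error up only after exhausting retries
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulate a registry that times out twice before succeeding.
calls = {"n": 0}
def post_schema():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("registry timeout")
    return "registered"

result = with_retries(post_schema)
```

The design choice is that only errors classified as transient are retried; a persistent error still surfaces to the user after the attempt budget is exhausted.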
Commit: fc8d4c0
49726: storage: remove compaction-debt based ingestion backpressure r=petermattis a=petermattis Fixes cockroachdb#49716 Release note: Remove compaction-debt based sstable ingestion backpressure which was artificially slowing down IMPORTs and RESTOREs on Pebble and not providing any utility on RocksDB. Removed the private "rocksdb.ingest_backpressure.pending_compaction_threshold" cluster setting. Co-authored-by: Peter Mattis <petermattis@gmail.com>
Commit: ff0ce40
48759: changefeedccl: retry posting to confluent registry up to 10 times r=dt a=dt Sometimes we see timeouts or other issues contacting the schema registry. However it would be nice if transient errors could be handled gracefully without bubbling up to the user. Release note (enterprise change): Changefeeds now retry after encountering transient errors contacting the Confluent Schema Registry. Co-authored-by: David Taylor <tinystatemachine@gmail.com>
Commit: 47a6a02
49727: execgen: some small improvements r=jordanlewis a=jordanlewis - Use `imports` to avoid having to precisely specify template imports - Use `strings.Replacer` to streamline some code Co-authored-by: Jordan Lewis <jordanthelewis@gmail.com>
Commit: 77fa68e
49687: opt: fold join simplification rules r=andy-kimball a=DrewKimball Previously, there were two pairs of rules for join simplification; one to handle joins with filters, and one to handle cross joins. Since the filtersMatchAllLeftRows function can now handle cross joins, there is no reason to have two sets of rules. This patch folds the four join simplification rules into two rules: SimplifyLeftJoin and SimplifyRightJoin. Release note: None Co-authored-by: Drew Kimball <andrewekimball@gmail.com>
Commit: 7bf3401
Commits on May 31, 2020
49723: sql: allow NULL IN <subquery> to typecheck r=jordanlewis a=rafiss fixes cockroachdb#49651 Release note (sql change): Previously, using NULL (or tuples containing NULLs) as the left-hand-side of an IN operator would not typecheck unless the NULLs were explicitly casted. Now, the casting is not required. Co-authored-by: Rafi Shamim <rafi@cockroachlabs.com>
Commit: 1422dd5
Commits on Jun 1, 2020
colexec: do not short-circuit hash joiner temporarily
I think concurrent calls of DrainMeta and Next are causing some flakes. This commit temporarily removes short-circuiting in the hash joiner when build side is empty. Release note: None
Commit: 416a017
49719: colexec: do not short-circuit hash joiner temporarily r=yuzefovich a=yuzefovich I think concurrent calls of DrainMeta and Next are causing some flakes. This commit temporarily removes short-circuiting in the hash joiner when build side is empty. Fixes: cockroachdb#49715. Release note: None Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
Commit: 187036c
ui: Visual tests for Admin UI pages
Add screenshot testing for the main pages in the Admin UI. These tests are quite flaky and might fail because of nondeterministic server state during test runs. Release note: None
Commit: d02d642
ui: Error message for non-table data on Databases page
Previously, error messages received when requesting non-table data on the Databases page were not handled and not displayed to users. Now, a default error message is shown when such errors occur. Release note (admin ui change): Show default error message about restricted permissions for non-admin users on Databases page.
Commit: f587bfa
*: fix the `fmtsafe` linter and fix the fallout

The `fmtsafe` linter added in cockroachdb#48040 cockroachdb#48048 was actually malfunctioning because it was not stripping the vendor prefix properly. This patch fixes it. Fixing the linter uncovered a range of defects throughout the remainder of the code, some benign and some outright bugs. Examples:

```
return pgerror.Wrapf(err, "while running %s", stmt)
```

(The second argument should be the pgcode, not the error string!)

```
return errors.Errorf(`no consistency checks are defined for %s` + gen.Meta().Name)
```

(Incompatible use of `%s` and string concatenation.)

```
errors.New(fmt.Sprintf("foo %s", blah))
```

(Should use `Newf()` instead to capture more data in telemetry.)

Release note: None
Commit: ecb371b
49714: colflow: use storage/fs.FS.RemoveAll r=jbowens a=jbowens Use the new storage/FS.RemoveAll method to clean up temporary directories. Release note: none Co-authored-by: Jackson Owens <jackson@cockroachlabs.com>
Commit: 6c98b80
49660: *: fix the `fmtsafe` linter and fix the fallout r=irfansharif a=knz

Found while working on cockroachdb#49447. The `fmtsafe` linter added in cockroachdb#48040 cockroachdb#48048 was actually malfunctioning because it was not stripping the vendor prefix properly. This patch fixes it. Fixing the linter uncovered a range of defects throughout the remainder of the code, some benign and some outright bugs. Examples:

```
return pgerror.Wrapf(err, "while running %s", stmt)
```

(The second argument should be the pgcode, not the error string!)

```
return errors.Errorf(`no consistency checks are defined for %s` + gen.Meta().Name)
```

(Incompatible use of `%s` and string concatenation.)

```
errors.New(fmt.Sprintf("foo %s", blah))
```

(Should use `Newf()` instead to capture more data in telemetry.)

Release note: None

Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
Commit: 1d94435
geo/geomfn: accelerate spatial predicates with bounding box checks
We are able to accelerate certain spatial predicates by determining earlier on whether their bounding boxes intersect. This PR applies that change. It is worth noting that I have not applied this to geogfn yet, as the bounding box for geogfns should be 3D. That's a biggish change that I will leave for a follow up PR. Release note: None
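The optimization described is the standard bounding-box pre-filter: if two shapes' bounding boxes are disjoint, the shapes cannot intersect, so the expensive exact predicate can be skipped. A 2D sketch under assumed representations (tuples of `(min_x, min_y, max_x, max_y)`; the `exact_intersects` placeholder is hypothetical, not the actual geo/geomfn code):

```python
def bbox_intersects(a, b):
    # a, b are (min_x, min_y, max_x, max_y) tuples.
    # Boxes overlap iff their projections overlap on both axes.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

calls = {"exact": 0}

def exact_intersects(a, b):
    """Placeholder for the expensive exact geometry test."""
    calls["exact"] += 1
    return True

def intersects(a, b):
    if not bbox_intersects(a, b):
        return False  # fast negative: exact test skipped entirely
    return exact_intersects(a, b)

disjoint = intersects((0, 0, 1, 1), (2, 2, 3, 3))  # boxes apart: no exact call
overlap = intersects((0, 0, 2, 2), (1, 1, 3, 3))   # boxes touch: exact call runs
```

The win is that the O(1) box check filters out most non-intersecting pairs before any per-vertex work happens.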
Commit: 3cc4920
cli: limit flag for debug range-data / debug keys
There was a teasing maxResults field already, completely unused. Release note: None
Commit: 84681dd
Commit: 1903921
sql: fix a physical planning buglet
The code was looking at the wrong error. Release note: None
Commit: d5e2a11
Commit: 9a04107
geo: remove geoproj dependency for now
Seems to be breaking Upload Binaries (not sure why) because execgen depends on geo. The dependencies should make it work (not sure why it doesn't), but it doesn't seem important to solve this atm. Release note: None
Commit: 58d5b23
sql: stop dealing with node addresses in DistSQL planning
And also restrict SpanResolver's interface to returning only ReplicaDescriptors, not also NodeDescriptors. This is all towards reducing the physical planning's dependency on gossip - node descriptors come from gossip. This patch doesn't yet eliminate the dependency - leaseholder oracles still use gossip to translate store ids to node ids and to order replicas, but it's a step in the right direction. Touches cockroachdb#48432 Release note: None
Commit: cdb4136
Commit: 9f75896
kvclient: get rid of unused err return
... in range cache eviction functions. And from there, upwards. Release note: None
Commit: 6de2e7b
geo: implement ST_Summary for Geography/Geometry
Fixes cockroachdb#48405, cockroachdb#49049 Release note (sql change): Implemented the geometry-based builtin `ST_Summary`.
Commit: bbc956d
types: fix bug where calling String() on a UDT array would panic
It seems like I ran into a proto bug, opened gogo/protobuf#693. Release note: None
Commit: ec0d894
opt: add rule to fold two grouping operators into one
Previously, the optimizer could not fold two grouping operators into a single equivalent grouping operator. This patch adds a rule that effects this transformation under the following conditions:
1. All of the outer aggregates are aggregating on the output columns of the inner aggregates.
2. Every inner-outer aggregation pair can be replaced with an equivalent single aggregate.
3. The inner grouping columns functionally determine the outer grouping columns.
4. Both grouping operators are unordered.

As an example, the following query pairs are equivalent:

```
SELECT sum(t) FROM (SELECT sum(b) FROM ab GROUP BY a) AS g(t);
SELECT sum(b) FROM ab;

SELECT max(t) FROM (SELECT max(b) FROM ab GROUP BY a) AS g(t);
SELECT max(b) FROM ab;

SELECT sum_int(t) FROM (SELECT count(b) FROM ab GROUP BY a) AS g(t);
SELECT count(b) FROM ab;
```

This situation is rare in direct SQL queries, but can arise when composing views and queries.

Release note (sql change): The optimizer can now fold two grouping operators together when they are aggregating over functions like sum.
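For the sum/sum pair, the equivalence behind condition 2 is easy to check numerically: summing per-group sums equals summing the raw column. A quick sketch in plain Python over a hypothetical table `ab` of `(a, b)` rows:

```python
from collections import defaultdict

ab = [(1, 10), (1, 20), (2, 5), (2, 5), (3, 7)]  # rows of (a, b)

# Inner aggregation: SELECT sum(b) FROM ab GROUP BY a
per_group = defaultdict(int)
for a, b in ab:
    per_group[a] += b

# Outer aggregation over the inner result: SELECT sum(t) FROM (...) AS g(t)
two_level = sum(per_group.values())

# Folded equivalent: SELECT sum(b) FROM ab
one_level = sum(b for _, b in ab)

print(two_level, one_level)  # 47 47
```

The same reasoning applies to max/max, and to count folding into sum_int of per-group counts, as in the query pairs above.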
Commit: 2b14fcc
c-deps/proj: bump PROJ to compile on more systems
CMake is broken for the value of `PTHREAD_MUTEX_RECURSIVE` on certain operating systems. To rectify this, we've patched PROJ at version 4.9.3 at `cc4a178` with a fix. We'll be using the CRDB fork of PROJ in the meantime. Release note: None
Commit: a6f8e44
49738: geo: implement ST_Summary for Geography/Geometry r=otan a=hueypark Fixes cockroachdb#48405, cockroachdb#49049 Release note (sql change): Implemented the geometry based builtins `ST_Summary`. Co-authored-by: Jaewan Park <jaewan.huey.park@gmail.com>
Commit: e43d665
Merge cockroachdb#49755 cockroachdb#49756 cockroachdb#49757
49755: c-deps/proj: bump PROJ to compile on more systems r=knz a=otan

CMake is broken for the value of `PTHREAD_MUTEX_RECURSIVE` on certain operating systems. To rectify this, we've patched PROJ at version 4.9.3 at `cc4a178` with a fix. We'll be using the CRDB fork of PROJ in the meantime. fixes cockroachdb#49749

Release note: None

49756: types: fix bug where calling String() on a UDT array would panic r=otan a=rohany

It seems like I ran into a proto bug, opened gogo/protobuf#693.

Release note: None

49757: geo: remove geoproj dependency for now r=otan a=otan

Seems to be breaking Upload Binaries (not sure why) because execgen depends on geo. The dependencies should make it work (not sure why it doesn't), but it doesn't seem important to solve this atm (planning to hide it in some sort of dependency injection hack later).

Release note: None

Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
Commit: 2300343
sql: remove two private settings
This commit removes two private settings that could disable planning of multiple join readers (i.e. would disable distribution) and disable planning merge joiners (i.e. would simply prohibit the use of merge joiners). It also retires two settings that have been removed after 20.1 was cut. Release note: None
Commit: 9d7d646
kvclient: simplify some confusing RangeCache code
A function dealing with inserting multiple descriptors (insertRangeDescriptorsLocked) was short-circuiting if any descriptor was found to be stale, which was causing trouble for the caller which had to call it twice. Release note: None
Commit: 189f685
kvclient: remove more unused err returns in range cache
Release note: None
Commit: 18c764a
Commit: d3c4a95
kvclient: rationalize range cache eviction interface
The eviction interface was weird - evictions took both a key and a descriptor; they performed a lookup using the key and then they compared with the descriptor. This was seemingly done to cover two cases - evictions by key and evictions by descriptor. Everything was complicated by the "inverted" argument which modified the key in a subtle way. This patch splits the interface into an explicit eviction by key, which doesn't take a descriptor, and one by descriptor which doesn't take a key. And the inverted guy goes away, as it turns out that it's not needed any more. Release note: None
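The split described above — an explicit eviction by key with no descriptor, and an eviction by descriptor with no key — can be sketched as a small cache API. This is a toy model under assumed names (`start_key`, `generation`), not the actual RangeCache code:

```python
class RangeCache:
    """Toy cache mapping a range's start key to its descriptor."""

    def __init__(self):
        self._by_start_key = {}

    def insert(self, desc):
        self._by_start_key[desc["start_key"]] = desc

    def lookup(self, key):
        # Simplified: exact start-key match instead of an ordered search.
        return self._by_start_key.get(key)

    def evict_by_key(self, key):
        """Evict whatever entry covers key; no descriptor argument needed."""
        self._by_start_key.pop(key, None)

    def evict_by_descriptor(self, desc):
        """Evict only if the cached entry is still this exact descriptor,
        so a newer entry inserted concurrently is left alone."""
        cached = self._by_start_key.get(desc["start_key"])
        if cached is not None and cached["generation"] == desc["generation"]:
            del self._by_start_key[desc["start_key"]]

cache = RangeCache()
old = {"start_key": "a", "generation": 1}
new = {"start_key": "a", "generation": 2}
cache.insert(old)
cache.insert(new)                # newer descriptor replaces the old one
cache.evict_by_descriptor(old)   # stale eviction is a no-op
still_cached = cache.lookup("a")
```

Keeping the two entry points separate removes the ambiguity of the old mixed key-plus-descriptor (and "inverted") signature.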
Commit: 8133e97
kvclient: remove mad indirection from range cache EvictionToken
The EvictionToken was quite hard to read because it interfaced with the RangeDescriptorCache through function pointers that bound arguments to the RangeDescriptorCache's eviction and insertion methods. Prior commits have removed most of those arguments, so this indirection doesn't serve any purpose any more. Now the EvictionToken logic is readable. Release note: None
Commit: 62a5c8d
Remove unnecessary protection against nil return. Release note: None
Commit: cc6f4db
Merge cockroachdb#49290 cockroachdb#49661 cockroachdb#49675
49290: cli: limit flag for debug range-data / debug keys r=andreimatei a=andreimatei

There was a teasing maxResults field already, completely unused.

Release note: None

49661: sql: remove two private settings r=yuzefovich a=yuzefovich

This commit removes two private settings that could disable planning of multiple join readers (i.e. would disable distribution) and disable planning merge joiners (i.e. would simply prohibit the use of merge joiners). It also retires two settings that have been removed after 20.1 was cut.

Release note: None

49675: sql: stop dealing with node addresses in DistSQL planning r=andreimatei a=andreimatei

And also restrict SpanResolver's interface to returning only ReplicaDescriptors, not also NodeDescriptors. This is all towards reducing the physical planning's dependency on gossip - node descriptors come from gossip. This patch doesn't yet eliminate the dependency - leaseholder oracles still use gossip to translate store ids to node ids and to order replicas, but it's a step in the right direction.

Touches cockroachdb#48432

Release note: None

Co-authored-by: Andrei Matei <andrei@cockroachlabs.com>
Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
Commit: 6b7b413
49641: pgwire/pgtest: make the tests work both on pg and crdb r=rafiss,rohany a=knz There was some accumulated incompatibilities in the test files. This patch fixes it, and in the process of doing so discovers two new bugs (cockroachdb#49639 and cockroachdb#49640). Summary of changes to the DSL: - the new `only` directive skips an entire test file if the db does not match (used for the crdb-specific portal bug test file) - the new flags `crdb_only` and `noncrdb_only` skip over a test directive if the db does not match. - the new flags `ignore_table_oids` and `ignore_type_oids` replace the corresponding OIDs in the RowDescription message by 0 prior to comparing with the expected value. Release note: None Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
Commit: 521b0c0
sql/physicalplan: small refactor
This patch lifts the concern about consistently resolving spans from a single range to the same node up to the spanResolver, from one of the replica oracles. Regardless of what oracle is used (i.e. the lease-holder aware one, or the follower-reads one), making a single node in charge of all the spans that fall within one range seems like a good idea. Currently only the leaseholder oracle was doing this. This makes SpanResolver.ReplicaInfo() idempotent. Release note: None
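Making `ReplicaInfo()` idempotent amounts to memoizing the per-range choice, so every span that falls in the same range is planned on the same node regardless of which oracle picked the replica. A sketch with hypothetical types (a range id and a replica list; not the actual SpanResolver code):

```python
import random

class SpanResolver:
    """Assigns each range to one node, consistently across calls."""

    def __init__(self, oracle):
        self._oracle = oracle        # picks a replica for a range
        self._assignments = {}       # range_id -> chosen replica

    def replica_info(self, range_id, replicas):
        # The first resolution consults the oracle; later calls reuse the
        # cached choice, so all spans within one range land on one node.
        if range_id not in self._assignments:
            self._assignments[range_id] = self._oracle(replicas)
        return self._assignments[range_id]

rng = random.Random(0)
resolver = SpanResolver(oracle=lambda replicas: rng.choice(replicas))
first = resolver.replica_info(42, [1, 2, 3])
second = resolver.replica_info(42, [1, 2, 3])
```

Even with a randomized oracle, the second call returns the first call's choice, which is the idempotence property the refactor establishes.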
Commit: d6f18ed
sql/physicalplan: remove stale test sleep
We were sleeping to deal with range cache clobbering, but we no longer have these clobbering problems since cache updates are now resistant to stale information overwriting newer information. Release note: None
Commit: 6766886
49657: sql: create schemaexpr package r=mgartner a=mgartner This commit adds a new package for dealing with expressions within table schemas, such as check constraints, computed columns, and partial index predicates. The utility provided previously by the following functions have been moved and refactored into this new package: dequalifyColumnRefs - Now provided by DequalifyColumnRefs. makeCheckConstraint - Now provided by CheckConstraintBuilder. validateComputedColumn - Now provided by ComputedColumnValidator. validateIndexPredicate - Now provided by IndexPredicateValidator. In addition, several helper functions were refactored into the new package, including generateMaybeDuplicateNameForCheckConstraint, generateNameForCheckConstraint, iterColDescriptorsInExpr, and replaceVars. dummyColumnItems were also moved into the new package. Release note: None Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com>
Commit: 8445a27
rowcontainer,rowexec: switch joinReader to use DiskBackedNumberedRowContainer

Additionally,
- added a randomized correctness test that compares the results of the indexed and numbered containers.
- added benchmark cases to the joinReader benchmark that limit memory. None of the workloads have repeated reads of the same right row and all access the right rows in monotonically increasing order, so the difference between the two containers is due to the numbered container avoiding the overhead of populating the cache.
- reduced the number of allocations in newNumberedDiskRowIterator.
  - the accesses slices share the same underlying slice.
  - a row copy, when there is a miss and the row is not added to the cache, is eliminated. When a copy is needed and we have evicted a row from the cache, the copying reuses that evicted row.
  - allocations of the map, map elements are reused.

Fixes cockroachdb#48118

Release note: None
Commit: f39ae24
vendor: bump Pebble to 4887c526300055e1c30635c53fd16b3fe9d9e132
4887c526 vfs: return nil from MemFS.RemoveAll if parent doesn't exist

Prerequisite for cockroachdb#49717. Release note: None
Commit: f268062
49276: roachtest/jobs: better failed test reporting r=spaskob a=spaskob When `jobs/mixed-versions` test fails it is useful to report on what mixed version configuration it failed. We also test for failed jobs in between each node upgrade to catch failures as quickly as possible. Release note: none. Co-authored-by: Spas Bojanov <spas@cockroachlabs.com>
craig[bot] and Spas Bojanov committed Jun 1, 2020 · Commit: d78e667
49593: build: Upgrade base image to deployment dockerfile r=bobvawter a=bobvawter This change updates the deployment base image from Debian 9.8 to 9.12. Fixes: cockroachdb#41390 Release note (build change): Release Docker images are now built on Debian 9.12. Co-authored-by: Bob Vawter <bob@vawter.org>
Commit: 2f77a6d
Merge cockroachdb#49648 cockroachdb#49669 cockroachdb#49769
49648: kvclient: assorted cleanups to the range desc cache r=andreimatei a=andreimatei

See individual commits.

49669: rowcontainer,rowexec: switch joinReader to use DiskBackedNumberedRowContainer r=sumeerbhola a=sumeerbhola

Additionally,
- added a randomized correctness test that compares the results of the indexed and numbered containers.
- added benchmark cases to the joinReader benchmark that limit memory. None of the workloads have repeated reads of the same right row and all access the right rows in monotonically increasing order, so the difference between the two containers is due to the numbered container avoiding the overhead of populating the cache.
- reduced the number of slice allocations in newNumberedDiskRowIterator.

Fixes cockroachdb#48118

Release note: None

49769: sql/physicalplan: small refactor r=andreimatei a=andreimatei

This patch lifts the concern about consistently resolving spans from a single range to the same node up to the spanResolver, from one of the replica oracles. Regardless of what oracle is used (i.e. the lease-holder aware one, or the follower-reads one), making a single node in charge of all the spans that fall within one range seems like a good idea. Currently only the leaseholder oracle was doing this. This makes SpanResolver.ReplicaInfo() idempotent.

Co-authored-by: Andrei Matei <andrei@cockroachlabs.com>
Co-authored-by: sumeerbhola <sumeer@cockroachlabs.com>
Commit: daf0418
opt: introduce VolatilitySet logical property
This commit adds a VolatilitySet logical property that will replace CanHaveSideEffects. For now, the property is not used for anything. The property reflects the volatility of any functions; future changes will incorporate the volatility of casts and scalar operators. The reason for why we need a set (rather than the "max" volatility) is that we will need to reoptimize a cached expression if it contains stable operators (regardless of whether it also contains volatile operators). Release note: None
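Because volatilities are small consecutive integers, a set of them fits in a single word as a bitfield, and the "does it contain stable operators?" question stays answerable even when volatile operators are also present — exactly the information a single max-volatility value would lose. A sketch with hypothetical constants (not the actual opt code):

```python
# Volatilities as small consecutive integers.
LEAKPROOF, IMMUTABLE, STABLE, VOLATILE = range(4)

def add(volatility_set, v):
    """Add one volatility to a bitfield-encoded set."""
    return volatility_set | (1 << v)

def contains(volatility_set, v):
    return bool(volatility_set & (1 << v))

# An expression containing both a stable and a volatile function:
s = 0
s = add(s, STABLE)
s = add(s, VOLATILE)

# max() over the members would report only VOLATILE; the set still records
# that STABLE is present, which is what decides whether a cached expression
# must be reoptimized.
needs_reopt = contains(s, STABLE)
```

This is why consecutive small integers (rather than a direct mapping to the postgres volatility characters) are convenient: the shift `1 << v` gives each volatility its own bit.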
Commit: 107c2ae
49725: opt: introduce VolatilitySet logical property r=RaduBerinde a=RaduBerinde This commit adds a `VolatilitySet` logical property that will replace `CanHaveSideEffects`. For now, the property is not used for anything. The property reflects the volatility of any functions; future changes will incorporate the volatility of casts and scalar operators. The reason for why we need a set (rather than the "max" volatility) is that we will need to reoptimize a cached expression if it contains stable operators (regardless of whether it also contains volatile operators). Release note: None Co-authored-by: Radu Berinde <radu@cockroachlabs.com>
Commit: 8dc2cfd
A DistSender test seems to have been left over from a different time - a time when presumably the responsibilities of the DistSender and the transport were split differently than today. This test was no longer making a lot of sense because some subtests were generating SendErrors in the transport, which is not a thing. Release note: None
SHA: b4d3f76
-
roachpb: non-nullable field in error
RangeKeyMismatchError.MismatchedRange is always set, even though it's nullable. Client code was inconsistently protecting against it being nil. This patch makes the field non-nullable, to clarify the situation. Release note: None
SHA: e813aee
-
49627: opt: add rule to fold two grouping operators into one r=DrewKimball a=DrewKimball Previously, the optimizer could not fold two grouping operators into a single equivalent grouping operator. This patch adds a rule that effects this transformation under the following conditions: 1. All of the outer aggregates are aggregating on the output columns of the inner aggregates. 2. Every inner-outer aggregation pair can be replaced with an equivalent single aggregate. 3. The inner grouping columns functionally determine the outer grouping columns. 4. Both grouping operators are unordered. As an example, the following query pairs are equivalent: ``` SELECT sum(t) FROM (SELECT sum(b) FROM ab GROUP BY a) AS g(t); SELECT sum(b) FROM ab; SELECT max(t) FROM (SELECT max(b) FROM ab GROUP BY a) AS g(t); SELECT max(b) FROM ab; SELECT sum_int(t) FROM (SELECT count(b) FROM ab GROUP BY a) AS g(t); SELECT count(b) FROM ab; ``` This situation is rare in direct SQL queries, but can arise when composing views and queries. Release note (sql change): The optimizer can now fold two grouping operators together when they are aggregating over functions like sum. Co-authored-by: Drew Kimball <drewk@cockroachlabs.com>
SHA: d7a0a90
-
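The fold above rests on an algebraic fact: aggregating per-group sums equals aggregating directly. A quick check of the first query pair in plain Python:

```python
from collections import defaultdict

ab = [(1, 10), (1, 20), (2, 30)]  # rows (a, b)

# inner: SELECT sum(b) FROM ab GROUP BY a
per_group = defaultdict(int)
for a, b in ab:
    per_group[a] += b

# outer: SELECT sum(t) FROM (...) AS g(t)
outer = sum(per_group.values())

# folded: SELECT sum(b) FROM ab
folded = sum(b for _, b in ab)
assert outer == folded == 60
```

The rule's conditions (inner grouping columns determining the outer ones, compatible aggregate pairs) are exactly what guarantees this equality holds in general, not just for sum.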
sql: add volatility information for operators
This change adds volatility information for unary, comparison, and binary operators. A subset of comparison operators are leak-proof; a small set of operators (mostly those involving the current timezone) are stable; and the rest are immutable. Release note: None
SHA: 1a7735b
Commits on Jun 2, 2020
-
Merge cockroachdb#49711 cockroachdb#49754
49711: encoding: define EncodeTableKey / DecodeTableKey for Geo types r=rohany a=otan For GROUP BY to work, there needs to be a way of encoding / decoding table keys for each column type. As such, define EncodeTableKey and DecodeTableKey for geo types to be their underlying protobuf representation (we currently have no better way!). Note that this does not yet allow us to support PKs for geospatial (we could, but we need some more iteration beforehand). I've shifted the array descriptor markers around as they seem to be new in 20.2 as well -- so they should be stable in the upcoming alpha release. Release note: None 49754: geo/geomfn: accelerate spatial predicates with bounding box checks r=sumeerbhola a=otan We are able to accelerate certain spatial predicates by determining earlier on whether their bounding boxes intersect. This PR applies that change. It is worth noting that I have not applied this to geogfn yet, as the bounding box for geogfns should be 3D. That's a biggish change that I will leave for a follow up PR. Release note: None Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
SHA: d652d65
-
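The bounding-box acceleration in cockroachdb#49754 is a classic fast path: if two geometries' bounding boxes are disjoint, the geometries cannot intersect, so the expensive exact predicate can be skipped. A sketch under assumed names (geometries carry a precomputed `bbox`):

```python
def bbox_intersects(a, b):
    """a, b are (min_x, min_y, max_x, max_y) rectangles."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def intersects(geom_a, geom_b, exact_predicate):
    if not bbox_intersects(geom_a["bbox"], geom_b["bbox"]):
        return False  # disjoint boxes: geometries cannot intersect
    return exact_predicate(geom_a, geom_b)  # fall through to exact check

assert not bbox_intersects((0, 0, 1, 1), (2, 2, 3, 3))
assert bbox_intersects((0, 0, 2, 2), (1, 1, 3, 3))
```

As the commit notes, the same trick needs a 3D bounding box before it can be applied to geography functions.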
sql: store partial index predicate in index descriptor
With this commit, serialized partial index predicates are now stored on index descriptors. Predicates are dequalified so that database and table names are not included in column references. Release note: None
SHA: c32f8a2
-
49772: vendor: bump Pebble to 4887c526300055e1c30635c53fd16b3fe9d9e132 r=jbowens a=jbowens 4887c526 vfs: return nil from MemFS.RemoveAll if parent doesn't exist Prerequisite for cockroachdb#49717. Release note: None Co-authored-by: Jackson Owens <jackson@cockroachlabs.com>
SHA: f0c1774
-
geo/geomfn: use GEOSRelatePattern for ST_ContainsProperly
GEOSRelatePattern seems to avoid the checks for wildcards, making ST_ContainsProperly faster. Release note: None
SHA: 26d5952
-
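For context, ST_ContainsProperly can be phrased as the DE-9IM relate pattern `T**FF*FF*` (this is the pattern PostGIS documents for it). Checking a pattern only requires comparing the nine-character intersection matrix against it, which is why a relate-pattern call can skip generic wildcard handling. An illustrative matcher:

```python
def matches_relate_pattern(matrix, pattern):
    """matrix: 9 chars from {'F','0','1','2'} (a DE-9IM intersection
    matrix); pattern may additionally use 'T' (any non-empty) and '*'."""
    for m, p in zip(matrix, pattern):
        if p == "*":
            continue            # don't care
        if p == "T":
            if m == "F":        # 'T' requires any non-empty intersection
                return False
        elif m != p:
            return False
    return True

CONTAINS_PROPERLY = "T**FF*FF*"
assert matches_relate_pattern("212FF1FF2", CONTAINS_PROPERLY)
assert not matches_relate_pattern("F12FF1FF2", CONTAINS_PROPERLY)
```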
49759: geo/geomfn: use GEOSRelatePattern for ST_ContainsProperly r=sumeerbhola a=otan GEOSRelatePattern seems to avoid the checks for wildcards, making ST_ContainsProperly faster. Release note: None Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
SHA: 9d1efbd
-
opt: add rule to eliminate a do-nothing join under a GroupBy
Previously, a join under a grouping operator could not be eliminated even when doing so would not affect the output of the grouping operator. This patch introduces two rules that match grouping operators (GroupBy, ScalarGroupBy and DistinctOn) that take a LeftJoin or InnerJoin as input. The join is removed if the following conditions are met: 1. Only columns from the left (or right) input are being used. 2. It can be statically proven that removal of the join will not affect the output of the grouping operator. ``` CREATE TABLE xy (x INT PRIMARY KEY, y INT); CREATE TABLE uv (u INT PRIMARY KEY, v INT); SELECT x, sum(y) FROM xy LEFT JOIN uv ON x=u GROUP BY x; => SELECT x, sum(y) FROM xy GROUP BY x; ``` Fixes cockroachdb#49141 Release note (sql change): The optimizer can now eliminate an unnecessary join that is the input to a GroupBy operator.
SHA: 9436368
-
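The SQL rewrite in the commit message can be checked concretely: because `u` is a key, the left join neither duplicates nor filters rows of `xy`, and since only left columns feed the aggregation, the join is a no-op. A small simulation:

```python
from collections import defaultdict

xy = [(1, 10), (2, 20)]    # (x, y); x is a key
uv = [(1, 100), (3, 300)]  # (u, v); u is a key

def group_sum(rows):
    g = defaultdict(int)
    for k, val in rows:
        g[k] += val
    return dict(g)

# SELECT x, sum(y) FROM xy LEFT JOIN uv ON x = u GROUP BY x:
uv_map = dict(uv)
joined = [(x, y, uv_map.get(x)) for x, y in xy]  # left join on a key

# dropping the join leaves the aggregation unchanged:
assert group_sum([(x, y) for x, y, _ in joined]) == group_sum(xy) == {1: 10, 2: 20}
```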
geo/geomfn: implement ST_Buffer
Implemented ST_Buffer. Had to write our own parsing logic to make it compatible with C. Also ran clang-format and fixed up a few bad infos. Release note (sql change): Implement ST_Buffer for geometry and string variants.
SHA: bf1245e
-
49722: geo/geomfn: implement ST_Buffer r=summerbhola a=otan Implemented ST_Buffer. Had to write our own parsing logic to make it compatible with C. Also ran clang-format and fixed up a few bad infos. Resolves cockroachdb#48890 Resolves cockroachdb#48891 Resolves cockroachdb#48802 Resolves cockroachdb#48803 Resolves cockroachdb#48804 Release note (sql change): Implement ST_Buffer for geometry and string variants. Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
SHA: e80ccc5
-
sql: add the `crdb_internal.create_type_statements` table
This PR adds the `crdb_internal.create_type_statements` table, which contains create statements for user defined types. This will be used when generating create statements for `cockroach dump` in cockroachdb#47765. This PR additionally refactors and deduplicates some code that iterates over all user defined type descriptors in other virtual tables. Release note (sql change): Add the `crdb_internal.create_type_statements` virtual table. It holds create statements for user defined types.
SHA: 0bcddba
-
49683: opt: add rule to eliminate a do-nothing join under a GroupBy r=DrewKimball a=DrewKimball Previously, a join under a grouping operator could not be eliminated even when doing so would not affect the output of the grouping operator. This patch introduces two rules that match grouping operators (GroupBy, ScalarGroupBy and DistinctOn) that take a LeftJoin or InnerJoin as input. The join is removed if the following conditions are met: 1. Only columns from the left (or right) input are being used. 2. It can be statically proven that removal of the join will not affect the output of the grouping operator. ``` CREATE TABLE xy (x INT PRIMARY KEY, y INT); CREATE TABLE uv (u INT PRIMARY KEY, v INT); SELECT x, sum(y) FROM xy LEFT JOIN uv ON x=u GROUP BY x; => SELECT x, sum(y) FROM xy GROUP BY x; ``` Fixes cockroachdb#49141 Release note (sql change): The optimizer can now eliminate an unnecessary join that is the input to a GroupBy operator. Co-authored-by: Drew Kimball <andrewekimball@gmail.com>
SHA: d65b224
-
backupccl: fix bug when backing up dropped tables with revision history
When performing an incremental backup with revision history, we want to include all spans that were public at any point during the latest interval under consideration (the time between the last backup and when you are performing the incremental backup). However, consider a table that was dropped before the interval started. The table's descriptor may still be visible (in the DROPPED state). We should not be interested in the spans for this table. So, when going through the list of revisions to table descriptors, we should make sure that the table in question was not DROPPED at some point during this interval. To see why this is needed, consider the following scenario (all backups are assumed to be taken with revision_history): - Create table mydb.a - Create table mydb.b - Drop table mydb.a - Take a backup of mydb (full) - Take an incremental backup (inc) of mydb - Create table mydb.c - Take another incremental backup (inc2) of mydb The backups "inc" and "inc2" should not be considered as backing up table "mydb.a", since it has been dropped at that point. Note that since "inc" does not see any descriptor changes, only "mydb.b" is included in its backup. However, previously, "inc2" would see a table descriptor for "mydb.a" (even though it is dropped) and include it in the set of spans included in "inc2". This is an issue since "inc" did not include this span, and thus there is a gap in the coverage for this dropped table. Release note (bug fix): There was a bug where, when performing incremental backups with revision history on a database (or full cluster), if a table in the database you were backing up was dropped and then other tables were later created, the backup would return an error. This is now fixed.
SHA: 692b50a
-
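The core check the fix describes can be sketched as a predicate over a table's revision history (names and representation here are hypothetical, not the backupccl code): include a table's spans only if it was PUBLIC at some point during the backup interval.

```python
PUBLIC, DROPPED = "PUBLIC", "DROPPED"

def include_in_backup(revisions, start, end):
    """revisions: sorted [(timestamp, state)] for one table descriptor.
    Include the table only if it was PUBLIC at some point in (start, end]."""
    state = None
    for ts, s in sorted(revisions):
        if ts <= start:
            state = s                 # state as of the interval's start
        elif ts <= end and s == PUBLIC:
            return True               # became public during the interval
    return state == PUBLIC

# mydb.a: created at t=1, dropped at t=3; an incremental covering (4, 6]
# must not include its spans, avoiding the coverage gap described above.
assert not include_in_backup([(1, PUBLIC), (3, DROPPED)], 4, 6)
assert include_in_backup([(1, PUBLIC)], 4, 6)
```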
geo/geomfn: implement ST_LineInterpolatePoint(s)
Fixes cockroachdb#48971 Fixes cockroachdb#48972 This PR adds the following builtin functions: * ST_LineInterpolatePoint{{geometry, float8}} * ST_LineInterpolatePoints{{geometry, float8, bool}} These work for LineStrings only and allow us to determine one or more interpolated points along the LineString at integral multiples of a given fraction of the LineString's total length. Release note (sql change): This PR adds the following built-in functions. * ST_LineInterpolatePoint{{geometry, float8}} * ST_LineInterpolatePoints{{geometry, float8, bool}}
SHA: cf81e9f
-
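What ST_LineInterpolatePoint computes can be sketched in a few lines: walk the linestring's segments until the target distance (fraction × total length) falls inside one, then linearly interpolate within that segment. A minimal 2D sketch, not the geo package's implementation:

```python
import math

def line_interpolate_point(coords, fraction):
    """coords: [(x, y), ...] of a LineString; 0 <= fraction <= 1."""
    segs = list(zip(coords, coords[1:]))
    lengths = [math.dist(a, b) for a, b in segs]
    target = fraction * sum(lengths)
    for (a, b), seg_len in zip(segs, lengths):
        if target <= seg_len:
            t = target / seg_len if seg_len else 0.0
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        target -= seg_len
    return coords[-1]

assert line_interpolate_point([(0, 0), (10, 0)], 0.5) == (5.0, 0.0)
assert line_interpolate_point([(0, 0), (4, 0), (4, 4)], 0.75) == (4.0, 2.0)
```

The plural ST_LineInterpolatePoints variant repeats this at every integral multiple of the fraction.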
49742: geo/geomfn: implement ST_LineInterpolatePoint(s) r=otan a=abhishek20123g Fixes cockroachdb#48971 Fixes cockroachdb#48972 This PR adds the following builtin functions: * ST_LineInterpolatePoint{{geometry, float8}} * ST_LineInterpolatePoints{{geometry, float8, bool}} These work for LineStrings only and allow us to determine one or more interpolated points along the LineString at integral multiples of a given fraction of the LineString's total length. Release note (sql change): This PR adds the following built-in functions. * ST_LineInterpolatePoint{{geometry, float8}} * ST_LineInterpolatePoints{{geometry, float8, bool}} Co-authored-by: abhishek20123g <abhishek20123g@gmail.com>
SHA: cbe7fdc
-
49592: ui: Error message for non-table data on Databases page r=koorosh a=koorosh Resolves one part of issue cockroachdb#48152 Previously, error messages received on requests for non-table data on the Databases page were not handled and not displayed for users. Now, a default error message is shown in response to errors. Release note (admin ui change): Show default error message about restricted permissions for non-admin users on Databases page. Test cases to validate correct behavior: - Databases page loads table info for a user with `admin` role - Databases page shows the default error message with `insufficient rights` message for non-admin user. - The loading indicator is displayed while response is not received. Before and after screenshots are attached to the PR. Co-authored-by: Andrii Vorobiov <and.vorobiov@gmail.com>
SHA: 42eb2d1
-
builtins: add missing UnwrapDatum call and change panics to error
getNameForArg does not handle tree.DOidWrappers. The assumption in this code is that the datum gets unwrapped before calling this code. This commit adds a missing UnwrapDatum call and changes an unhandled type panic to an error instead. Release note (bug fix): a panic observed as "unexpected arg type tree.DOidWrapper" has been fixed.
SHA: cd2bc89
-
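The bug class fixed here is a type switch that forgets to unwrap a wrapper type first. A conceptual sketch in Python (the classes are illustrative stand-ins for `tree.DOidWrapper` and `tree.UnwrapDatum`, not CockroachDB's actual types):

```python
class DOidWrapper:
    """Stand-in for a datum wrapped with an extra OID."""
    def __init__(self, wrapped):
        self.wrapped = wrapped

def unwrap_datum(d):
    while isinstance(d, DOidWrapper):
        d = d.wrapped
    return d

def name_for_arg(d):
    d = unwrap_datum(d)  # the missing call the commit adds
    if isinstance(d, str):
        return d
    # an error, not a panic, for genuinely unhandled types
    raise TypeError(f"unexpected arg type {type(d).__name__}")

assert name_for_arg(DOidWrapper("pg_type")) == "pg_type"
```

Without the `unwrap_datum` call, the wrapped value falls through to the unhandled-type branch, which is exactly the "unexpected arg type tree.DOidWrapper" failure described above.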
49601: builtins: add missing UnwrapDatum call and change panics to error r=yuzefovich,jordanlewis a=asubiotto getNameForArg does not handle tree.DOidWrappers. The assumption in this code is that the datum gets unwrapped before calling this code. This commit adds a missing UnwrapDatum call and changes an unhandled type panic to an error instead. Release note (bug fix): a panic observed as "unexpected arg type tree.DOidWrapper" has been fixed. Closes cockroachdb#43166 Co-authored-by: Alfonso Subiotto Marques <alfonso@cockroachlabs.com>
SHA: 1be1ec0
-
SHA: 151f128
-
SHA: 97e4d75
-
49771: sql: add the `crdb_internal.create_type_statements` table r=otan a=rohany This PR adds the `crdb_internal.create_type_statements` table, which contains create statements for user defined types. This will be used when generating create statements for `cockroach dump` in cockroachdb#47765. This PR additionally refactors and deduplicates some code that iterates over all user defined type descriptors in other virtual tables. Release note (sql change): Add the `crdb_internal.create_type_statements` virtual table. It holds create statements for user defined types. Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
SHA: b4b7d45
-
SHA: 7c5360f
-
SHA: b424bbd
-
SHA: 06de45a
-
server: use "read-only" Gossip for tenants
We were previously using the Gossip instance of the TestServer against which the tenant was initialized. This commit trims the dependency further by initializing its own Gossip instance which is never written to (i.e. `AddInfo` is never called) and which does not accept incoming connections. As a reminder, the remaining problematic uses of Gossip as of this commit are: - making a `nodeDialer` (for `DistSender`), tracked in: cockroachdb#47909 - access to the system config: - `(schemaChangeGCResumer).Resume`, tracked: cockroachdb#49691 - `database.Cache`, tracked: cockroachdb#49692 - `(physicalplan).spanResolver` (for replica oracle). This is likely not a blocker as we can use a "dumber" oracle in this case; the oracle is used for distsql physical planning of which tenants will do none. Tracked in: cockroachdb#48432 Release note: None
SHA: 4d036c3
-
server: make cli-ready startTenant method()
The goal is to turn `testSQLServerArgs` into something that can be re-used to introduce a cli command for starting a SQL tenant. This commit takes a step in that direction by lifting some testing-specific inputs up. Release note: None
SHA: 72cd586
-
SHA: 76eed9c
-
SHA: d94b356
-
SHA: 5f073b8
-
sql: remove separate scanVisibility struct
This commit removes `sql.scanVisibility` in favor of protobuf-generated `execinfrapb.ScanVisibility`. It also introduces prettier aliases for the two values into `execinfra` package that are now used throughout the code. Release note: None
SHA: b0ccebb
-
sql: clean up of scan node and a few other things
This commit does the following cleanups of `scanNode`: 1. refactors `scanNode.initCols` method to be standalone (it will probably be reused later by distsql spec exec factory). 2. removes `numBackfillColumns`, `specifiedIndexReverse`, and `isSecondaryIndex` fields since they are no longer used. 3. refactors the code to remove `valNeededForCols` field which was always consecutive numbers in range `[0, len(n.cols)-1]`. 4. refactors `getIndexIdx` method to not depend on `scanNode`. Additionally, this commit removes `planDependencies` business from `planTop` since the optimizer now handles CREATE VIEW and handles the plan dependencies on its own (and CREATE VIEW was the single user of that struct in the plan top). Also, it removes what seems like an unnecessary call to `planColumns` when creating the distinct spec, and an unused parameter from the `createTableReaders` method. Release note: None
SHA: e2ac346
-
Merge cockroachdb#49693 cockroachdb#49724
49693: server: use "read-only" Gossip for tenants r=asubiotto,nvanbenschoten a=tbg We were previously using the Gossip instance of the TestServer against which the tenant was initialized. This commit trims the dependency further by initializing its own Gossip instance which is never written to (i.e. `AddInfo` is never called) and which does not accept incoming connections. As a reminder, the remaining problematic uses of Gossip as of this commit are: - making a `nodeDialer` (for `DistSender`), tracked in: cockroachdb#47909 - access to the system config: - `(schemaChangeGCResumer).Resume`, tracked: cockroachdb#49691 - `database.Cache`, tracked: cockroachdb#49692 - `(physicalplan).spanResolver` (for replica oracle). This is likely not a blocker as we can use a "dumber" oracle in this case; the oracle is used for distsql physical planning of which tenants will do none. Tracked in: cockroachdb#48432 cc @ajwerner Release note: None 49724: sql: clean up of scanNode and some other things r=yuzefovich a=yuzefovich **sql: unify PlanningCtx constructors into one** Release note: None **sql: remove separate scanVisibility struct** This commit removes `sql.scanVisibility` in favor of protobuf-generated `execinfrapb.ScanVisibility`. It also introduces prettier aliases for the two values into `execinfra` package that are now used throughout the code. Release note: None **sql: clean up of scan node and a few other things** This commit does the following cleanups of `scanNode`: 1. refactors `scanNode.initCols` method to be standalone (it will probably be reused later by distsql spec exec factory). 2. removes `numBackfillColumns`, `specifiedIndexReverse`, and `isSecondaryIndex` fields since they are no longer used. 3. refactors the code to remove `valNeededForCols` field which was always consecutive numbers in range `[0, len(n.cols)-1]`. 4. refactors `getIndexIdx` method to not depend on `scanNode`. 
Additionally, this commit removes `planDependencies` business from `planTop` since optimizer now handles CREATE VIEW and handles the plan dependencies on its own (and CREATE VIEW was the single user of that struct in the plan top). Also, it removes (which seems like) unnecessary call to `planColumns` when creating distinct spec and an unused parameter from `createTableReaders` method. Addresses: cockroachdb#47474. Release note: None Co-authored-by: Tobias Schottdorf <tobias.schottdorf@gmail.com> Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
SHA: 3b2245f
-
colexec: add support for Concat binary operator
Previously, the vectorized engine lacked support for the Concat binary operator. This commit adds that support. I implemented concat for bytes, referring to the same operator in the row-oriented engine. Since there was already a "getBinOpAssignFunc" implemented for "datumCustomizer", I just registered the output type of the Concat operator for the datum type, so that the Concat operator for datum types is generated as expected. Release note (sql change): The vectorized execution engine now supports the "Concat" ("||") operator.
SHA: d13a496
-
SHA: e29011d
-
Merge cockroachdb#49764 cockroachdb#49778 cockroachdb#49793
49764: sql: add volatility information for operators r=RaduBerinde a=RaduBerinde This change adds volatility information for unary, comparison, and binary operators. A subset of comparison operators are leak-proof; a small set of operators (mostly those involving the current timezone) are stable; and the rest are immutable. Release note: None 49778: roachpb: non-nullable field in error r=andreimatei a=andreimatei RangeKeyMismatchError.MismatchedRange is always set, even though it's nullable. Client code was inconsistently protecting against it being nil. This patch makes the field non-nullable, to clarify the situation. Release note: None 49793: Adding myself to AUTHORS r=adityamaru a=adityamaru Co-authored-by: Radu Berinde <radu@cockroachlabs.com> Co-authored-by: Andrei Matei <andrei@cockroachlabs.com> Co-authored-by: Aditya Maru <adityamaru@gmail.com>
SHA: 2bfdd74
-
workload/schemachange: classify schema change errors
Previously this workload would not fail even when a schema change operation returned an error. Now we extract the SQLSTATE code from the error, and if it is of class `09` (Triggered Action Exception) or `XX` (Internal Error), we report the error to the workload runner, which will fail if `tolerate-errors=false`. This PR also refactors the `runInTxn` and `run` functions. `runInTxn` now handles txn rollback upon the first error encountered and marks the returned errors as fatal, meaning the workload should be aborted, or as rollback when a rollback was performed. Additionally, schema change operations are timed in `runInTxn` without counting the time for producing a random operation, since random operation creation may require a few failed attempts. Release note: none.
Spas Bojanov committed Jun 2, 2020
SHA: 315ed42
-
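The classification rule above amounts to a check on the first two characters of the SQLSTATE code (its class). A minimal sketch:

```python
def is_fatal_sqlstate(sqlstate):
    """Report an error to the workload runner iff its SQLSTATE class is
    09 (Triggered Action Exception) or XX (Internal Error)."""
    return sqlstate[:2] in ("09", "XX")

assert is_fatal_sqlstate("XX000")      # internal_error
assert is_fatal_sqlstate("09000")      # triggered_action_exception
assert not is_fatal_sqlstate("42601")  # syntax_error: tolerated
```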
roachtest/schemachange: specify concurrency
This is a small refactor so that the roachtest can specify the concurrency in addition to the max_ops for the schemachange workload. Release note: none.
Spas Bojanov committed Jun 2, 2020
SHA: 642c03e
-
roachtest/schemachange: random operations test
This PR adds a simple roachtest that runs the schemachange workload on a 3-node cluster. The purpose is to detect any regression in the error reporting of schema changes. Since the workload still discovers failing errors that will take some time to be fixed, it can be run with the option `tolerate-errors=true` to prevent aborting on these errors. Release note: none.
Spas Bojanov committed Jun 2, 2020
SHA: f0c40a0
-
Release note (bug fix): Fixed a panic in the CLI that occurred when a workload returned an error while `ctx.Err()` was nil.
Spas Bojanov committed Jun 2, 2020
SHA: 02aa0c9
-
Merge cockroachdb#49453 cockroachdb#49758
49453: sql: store partial index predicate in index descriptor r=mgartner a=mgartner With this commit, serialized partial index predicates are now stored on index descriptors. Predicates are dequalified so that database and table names are not included in column references. Release note: None 49758: colexec: add support for Concat binary operator r=yuzefovich a=yongyanglai Previously, the vectorized engine lacked support for the Concat binary operator. This commit adds that support. I implemented concat for bytes, referring to the same operator in the row-oriented engine. Since there was already a "getBinOpAssignFunc" implemented for "datumCustomizer", I just registered the output type of the Concat operator for the datum type, so that the Concat operator for datum types is generated as expected. Resolves: cockroachdb#49466 See also: cockroachdb#49463 Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com> Co-authored-by: Yongyang Lai <m@yylai.xyz>
SHA: 455a67b
-
sql: add stub exec.Factory for opt-driven distsql planning
This commit adds a stub implementation of the `exec.Factory` interface that will be creating DistSQL processor specs directly from `opt.Expr`, sidestepping the intermediate `planNode` phase. It also introduces a new private cluster setting "sql.defaults.experimental_distsql_planning" as well as a session variable "experimental_distsql_planning" which determine whether the new factory is used, set to `off` by default (other options are `on` and `always` - the latter is only for the session variable). `Off` planning mode means using the old code path, `on` means attempting to use the new code path but falling back to the old one if we encounter an error, and `always` means using only the new code path and never falling back in case of an error. Currently the fallback doesn't occur with `always` only for SELECT statements (so that we could run other statement types, like SET), meaning that when the `always` option is used, if we encounter an unsupported node while planning a SELECT statement, an error is returned, but if we encounter an unsupported node while planning a statement other than SELECT, we still fall back to the old code (in a sense `always` behaves exactly like `on` for all statement types except for SELECTs). Release note: None
SHA: 652821e
-
sql: add plumbing for phys plan creation directly in execbuilder
This commit introduces `planMaybePhysical` utility struct that represents a plan and uses either planNode ("logical") or DistSQL spec ("physical") representations. It will be removed once we are able to create all processor specs in execbuilder directly. This struct has been plumbed in all places that need to look at the plan, but in most of them only the logical representation is supported. However, the main codepath for executing a statement supports both, and if physical representation is used, then we bypass distsql physical planner. This commit also renames `checkSupportForNode` to `checkSupportForPlanNode` and `createPlanForNode` to `createPhysPlanForPlanNode` to make their purpose more clear. Release note: None
SHA: 2b69f7e
-
sql: operate on pointers to PhysicalPlan
This commit changes `createPlanFor*` methods to operate on pointers to `PhysicalPlan` instead of values. It also renames and cleans up some explain-related utility methods. Release note: None
SHA: 5fdc7cc
-
Merge cockroachdb#48338 cockroachdb#49348
48338: workload/schemachange: classify schema change errors r=spaskob a=spaskob Previously this workload would not fail even when a schema change operation returned an error. Now we extract the SQLSTATE code from the error, and if it is of class `09` (Triggered Action Exception) or `XX` (Internal Error) we abort the workload and report the error. Also, previously we aborted the txn at the first operation error; now we continue unless the error is one of the classes above. We also add a roachtest `schemachange/random-load` that runs the workload and fails if unclassified errors are encountered. Fixes cockroachdb#23286. Release note: none. 49348: sql: distsql spec creation in execbuilder plumbing r=yuzefovich a=yuzefovich **sql: add stub exec.Factory for opt-driven distsql planning** This commit adds a stub implementation of the `exec.Factory` interface that will create DistSQL processor specs directly from `opt.Expr`, sidestepping the intermediate `planNode` phase. It also introduces a new private cluster setting "sql.defaults.experimental_distsql_planning" as well as a session variable "experimental_distsql_planning" which determine whether the new factory is used, set to `off` by default (other options are `on` and `always` - the latter is only for the session variable). `Off` planning mode means using the old code path, `on` means attempting to use the new code path but falling back to the old one if we encounter an error, and `always` means using only the new code path and never falling back in case of an error.
Currently the fallback is skipped with `always` only for SELECT statements (so that we can still run other statement types, like SET): when the `always` option is used, if we encounter an unsupported node while planning a SELECT statement, an error is returned, but if we encounter an unsupported node while planning a statement other than SELECT, we still fall back to the old code (in a sense `always` behaves exactly like `on` for all statement types except for SELECTs). Release note: None **sql: add plumbing for phys plan creation directly in execbuilder** This commit introduces the `planMaybePhysical` utility struct that represents a plan using either a planNode ("logical") or a DistSQL spec ("physical") representation. It will be removed once we are able to create all processor specs in execbuilder directly. This struct has been plumbed into all places that need to look at the plan, but in most of them only the logical representation is supported. However, the main codepath for executing a statement supports both, and if the physical representation is used, then we bypass the distsql physical planner. This commit also renames `checkSupportForNode` to `checkSupportForPlanNode` and `createPlanForNode` to `createPhysPlanForPlanNode` to make their purpose more clear. Addresses: cockroachdb#47474. Release note: None **sql: operate on pointers to PhysicalPlan** This commit changes `createPlanFor*` methods to operate on pointers to `PhysicalPlan` instead of values. It also renames and cleans up some explain-related utility methods. Release note: None Co-authored-by: Spas Bojanov <spas@cockroachlabs.com> Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
Commit: 464e5c4
changefeedccl: change default flush interval to 5s
We observed a customer cluster's changefeeds to cloud storage 'getting stuck', which on further investigation was determined to be happening because they were spending too much time flushing. This was because they were writing to a cloud sink and the default flush interval of 200ms (poller interval of 1s / 5) meant the changefeed spent all of its time flushing. This default was picked while testing with lower-latency sinks and was noted in a comment as somewhat arbitrary. This change increases the default to 5s. Users who truly desire lower latency can of course specify their own 'resolved' interval, so this change in the default is for those that are indifferent, and increasing the latency to 5s reduces the chance of hitting this unfortunate edge case when the sink is too slow. Release note (enterprise change): The default flush interval for changefeeds that do not specify a 'resolved' option is now 5s instead of 200ms to more gracefully handle higher-latency sinks.
Commit: 848abda
Commit: 31a52c0
Merge cockroachdb#47502 cockroachdb#49807
47502: geoviz: add a visualization tool for geospatial data r=sumeerbhola a=otan This PR adds a visualization tool for looking at geography and S2 data. It's pretty rudimentary right now, but will help immensely with visualizing and explaining concepts especially for the RFC. Release note: None ![image](https://user-images.githubusercontent.com/3646147/79398843-8421f180-7f36-11ea-8f69-d18896145bde.png) 49807: adding self to authors r=helenmhe a=helenmhe Co-authored-by: Oliver Tan <otan@cockroachlabs.com> Co-authored-by: Helen He <helenhe.mit@gmail.com>
Commit: b254299
sql: adds unit tests for schemaexpr.CheckConstraintBuilder
This commit adds unit tests to the previously untested functions of `schemaexpr.CheckConstraintBuilder.DefaultName`. It also adds a `testutils.go` file to the `schemaexpr` package with a function that makes it easier to create table descriptors that can be used in tests. Release note: None
Commit: 516d593
changefeedccl: Treat node draining errors as retryable.
Handle flow registration errors due to a draining node as retryable. Release note (reliability): Treat errors due to draining nodes as retryable when starting CDC.
Yevgeniy Miretskiy committed Jun 2, 2020
Commit: 30fe550
49777: sql: adds unit tests for schemaexpr.CheckConstraintBuilder r=mgartner a=mgartner This commit adds unit tests to the previously untested functions of `schemaexpr.CheckConstraintBuilder.DefaultName`. It also adds a `testutils.go` file to the `schemaexpr` package with a function that makes it easier to create table descriptors that can be used in tests. Release note: None Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com>
Commit: 31aa222
roachpb: refuse nil desc in NewRangeKeyMismatchError
Since recently RangeKeyMismatchError does not support nil descriptors, but it still had code that pretended to deal with nils (even though a nil would have exploded a bit later). Only one test caller was passing a nil, and it turns out that was dead code. Release note: None
Commit: e3bd79d
49743: cdc: Treat node draining errors as retryable. r=miretskiy a=miretskiy Fixes cockroachdb#46515 Fixes cockroachdb#43771 Handle flow registration errors due to a draining node as retryable. Release note (reliability): Treat errors due to draining nodes as retryable when starting CDC. Co-authored-by: Yevgeniy Miretskiy <yevgeniy@cockroachlabs.com>
craig[bot] and Yevgeniy Miretskiy committed Jun 2, 2020
Commit: a18b32f
storage: Add rocksdb-vs-pebble benchmark for ExportToSst
As part of the investigation into cockroachdb#49710, this change adds a benchmark for ExportToSst that tests both RocksDB and Pebble. Here are some example runs (old = rocksdb, new = pebble):

name                                                  old time/op  new time/op  delta
ExportToSst/rocksdb/numKeys=64/numRevisions=1-12      43.9µs ± 3%  34.5µs ± 4%  -21.33% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=64/numRevisions=10-12      281µs ± 3%   169µs ± 6%  -39.89% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=64/numRevisions=100-12    1.82ms ±22%  1.17ms ± 1%  -35.73% (p=0.000 n=10+9)
ExportToSst/rocksdb/numKeys=512/numRevisions=1-12      212µs ± 6%   111µs ± 3%  -47.77% (p=0.000 n=10+9)
ExportToSst/rocksdb/numKeys=512/numRevisions=10-12    1.91ms ± 1%  1.19ms ± 8%  -37.65% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=512/numRevisions=100-12   13.7ms ± 3%  10.1ms ±12%  -26.21% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=1024/numRevisions=1-12     390µs ± 1%   215µs ±12%  -44.94% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=1024/numRevisions=10-12   4.01ms ± 6%  2.40ms ±16%  -40.13% (p=0.000 n=10+9)
ExportToSst/rocksdb/numKeys=1024/numRevisions=100-12  27.9ms ± 2%  20.8ms ± 2%  -25.48% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=8192/numRevisions=1-12    2.97ms ± 2%  1.42ms ± 5%  -52.24% (p=0.000 n=9+10)
ExportToSst/rocksdb/numKeys=8192/numRevisions=10-12   32.8ms ± 7%  19.1ms ± 3%  -41.59% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=8192/numRevisions=100-12   224ms ± 3%   169ms ±25%  -24.64% (p=0.000 n=9+10)
ExportToSst/rocksdb/numKeys=65536/numRevisions=1-12   23.7ms ± 4%  13.4ms ±20%  -43.65% (p=0.000 n=9+10)
ExportToSst/rocksdb/numKeys=65536/numRevisions=10-12   264ms ± 4%   201ms ±24%  -23.92% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=65536/numRevisions=100-12  1.88s ± 6%   1.23s ± 8%  -34.70% (p=0.000 n=10+8)

Release note: None.
Commit: 16cbd80
Commits on Jun 3, 2020
geo/geodist: speed up dwithin/distance operations with bbox
We can optimize distance checks by saying if their bounding boxes do not intersect then we only need to check the exteriors. This shaved 2-3s off a 7s query available in the PostGIS tutorial. Existing test cases exercise this functionality well. Release note: None
Commit: 3d85fed
49799: geo/geodist: speed up dwithin/distance operations with bbox r=sumeerbhola a=otan We can optimize distance checks by saying if their bounding boxes do not intersect then we only need to check the exteriors. This shaved 2-3s off a 7s query available in the PostGIS tutorial. Existing test cases exercise this functionality well. Release note: None Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
Commit: a51c5e3
sql: handle inserts for partial indexes
This commit makes INSERTs only write new entries to partial indexes when the new row satisfies the predicate expression of the partial index. In order to do this, the optimizer synthesizes a column for each partial index defined on the table that evaluates to true if the respective partial index should be written to. These columns are interpreted by the execution engine as a set of IndexIDs to not write to. Release note: None
Commit: 1c3404a
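The mechanism this commit describes — a synthesized boolean per partial index that gates whether the index is written — can be sketched as follows. This is an illustrative stand-in, not CockroachDB's actual execution code; the `row`, `partialIndex`, and `indexesToSkip` names and the example predicate are hypothetical:

```go
package main

import "fmt"

// row is a hypothetical stand-in for a table row keyed by column name.
type row map[string]int

// partialIndex pairs an index ID with its predicate expression,
// modeled here as a Go function over the row.
type partialIndex struct {
	id   int
	pred func(row) bool
}

// indexesToSkip mirrors the synthesized boolean columns: for each
// partial index, evaluate the predicate against the new row and
// collect the IDs of indexes that must NOT be written to.
func indexesToSkip(r row, indexes []partialIndex) []int {
	var skip []int
	for _, idx := range indexes {
		if !idx.pred(r) {
			skip = append(skip, idx.id)
		}
	}
	return skip
}

func main() {
	// Hypothetical partial index: CREATE INDEX ... WHERE v > 10.
	indexes := []partialIndex{{id: 2, pred: func(r row) bool { return r["v"] > 10 }}}
	fmt.Println(indexesToSkip(row{"v": 5}, indexes))  // predicate false: skip the index write
	fmt.Println(indexesToSkip(row{"v": 42}, indexes)) // predicate true: write the index entry
}
```

In the real system the predicate is evaluated by the optimizer as a synthesized column rather than a Go closure, but the gating decision per inserted row is the same shape.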
sql: serialize UDTs in expressions in a stable way
Fixes cockroachdb#49379. This PR ensures that serialized expressions stored durably in table descriptors are serialized in a format that is stable across changes to user defined types present in those expressions. An effect of this change is that these expressions must be reparsed and formatted in a human readable way before display in statements like `SHOW CREATE TABLE`. That work will be done in a follow up PR. Release note: None
Commit: b527d66
49800: roachtest: log on error in isAlive r=nvanbenschoten a=nvanbenschoten Closes cockroachdb#49358. Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Commit: 69bc019
opt: incorporate casts volatility
This change incorporates the new cast volatility information into the VolatilitySet property. Release note: None
Commit: 6517319
sql: add support for ANALYZE <tablename>
This commit adds support for `ANALYZE <tablename>` by adding the statement as syntactic sugar for the equivalent command `CREATE STATISTICS "" FROM <tablename>`. This improves compatibility with Postgres, and is needed to run the PostGIS tutorial as written. Note that this commit does not add support for `ANALYZE` without a table name. We can add support for that and other variants later if needed, but it is not necessary for the PostGIS tutorial. Fixes cockroachdb#49214 Release note (sql change): Added support for `ANALYZE <tablename>`, which causes the database to collect statistics on the given table for use by the optimizer. The functionality of this command is equivalent to the existing command `CREATE STATISTICS "" FROM <tablename>`, but it increases compatibility with Postgres by using the same syntax that Postgres uses.
Commit: 3105d70
Merge cockroachdb#49565 cockroachdb#49721 cockroachdb#49815
49565: sql: serialize UDTs in expressions in a stable way r=otan,jordanlewis a=rohany Fixes cockroachdb#49379. This PR ensures that serialized expressions stored durably in table descriptors are serialized in a format that is stable across changes to user defined types present in those expressions. An effect of this change is that these expressions must be reparsed and formatted in a human readable way before display in statements like `SHOW CREATE TABLE`. Release note: None

49721: storage: Add rocksdb-vs-pebble benchmark for ExportToSst r=itsbilal a=itsbilal As part of the investigation into cockroachdb#49710, this change adds a benchmark for ExportToSst that tests both RocksDB and Pebble. Here are some example runs without contention (old = rocksdb, new = pebble):

name                                                  old time/op  new time/op  delta
ExportToSst/rocksdb/numKeys=64/numRevisions=1-12      43.9µs ± 3%  34.5µs ± 4%  -21.33% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=64/numRevisions=10-12      281µs ± 3%   169µs ± 6%  -39.89% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=64/numRevisions=100-12    1.82ms ±22%  1.17ms ± 1%  -35.73% (p=0.000 n=10+9)
ExportToSst/rocksdb/numKeys=512/numRevisions=1-12      212µs ± 6%   111µs ± 3%  -47.77% (p=0.000 n=10+9)
ExportToSst/rocksdb/numKeys=512/numRevisions=10-12    1.91ms ± 1%  1.19ms ± 8%  -37.65% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=512/numRevisions=100-12   13.7ms ± 3%  10.1ms ±12%  -26.21% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=1024/numRevisions=1-12     390µs ± 1%   215µs ±12%  -44.94% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=1024/numRevisions=10-12   4.01ms ± 6%  2.40ms ±16%  -40.13% (p=0.000 n=10+9)
ExportToSst/rocksdb/numKeys=1024/numRevisions=100-12  27.9ms ± 2%  20.8ms ± 2%  -25.48% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=8192/numRevisions=1-12    2.97ms ± 2%  1.42ms ± 5%  -52.24% (p=0.000 n=9+10)
ExportToSst/rocksdb/numKeys=8192/numRevisions=10-12   32.8ms ± 7%  19.1ms ± 3%  -41.59% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=8192/numRevisions=100-12   224ms ± 3%   169ms ±25%  -24.64% (p=0.000 n=9+10)
ExportToSst/rocksdb/numKeys=65536/numRevisions=1-12   23.7ms ± 4%  13.4ms ±20%  -43.65% (p=0.000 n=9+10)
ExportToSst/rocksdb/numKeys=65536/numRevisions=10-12   264ms ± 4%   201ms ±24%  -23.92% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=65536/numRevisions=100-12  1.88s ± 6%   1.23s ± 8%  -34.70% (p=0.000 n=10+8)

And some with contention=true:

name                                                                   old time/op  new time/op  delta
ExportToSst/rocksdb/numKeys=65536/numRevisions=10/contention=true-12    362ms ± 7%   168ms ± 3%  -53.60% (p=0.000 n=10+10)
ExportToSst/rocksdb/numKeys=65536/numRevisions=100/contention=true-12   2.24s ± 6%   1.24s ±10%  -44.50% (p=0.000 n=10+10)

Release note: None.

49815: roachpb: refuse nil desc in NewRangeKeyMismatchError r=andreimatei a=andreimatei Since recently RangeKeyMismatchError does not support nil descriptors, but it still had code that pretended to deal with nils (even though a nil would have exploded a bit later). Only one test caller was passing a nil, and it turns out that was dead code. Release note: None Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu> Co-authored-by: Bilal Akhtar <bilal@cockroachlabs.com> Co-authored-by: Andrei Matei <andrei@cockroachlabs.com>
Commit: c8808ad
Commit: fa3286b
kvserver: TestReplicateQueueDownReplicate
This test was trying to upreplicate a range and then wait for the replication queue to downreplicate it. The test had rotted back when we switched the replication factor of system ranges from 3x to 5x; with 5x replication, the range started off as having 5 replicas so the upreplication step was unnecessary. Worse, the upreplication part was flaky, presumably because it was constantly racing with the replication queue trying to downreplicate the range (although I couldn't repro despite a lot of stress). Fixes cockroachdb#48284 Release note: None
Commit: 8221950
49812: kvserver: TestReplicateQueueDownReplicate r=andreimatei a=andreimatei This test was trying to upreplicate a range and then wait for the replication queue to downreplicate it. The test had rotted back when we switched the replication factor of system ranges from 3x to 5x; with 5x replication, the range started off as having 5 replicas so the upreplication step was unnecessary. Worse, the upreplication part was flaky, presumably because it was constantly racing with the replication queue trying to downreplicate the range (although I couldn't repro despite a lot of stress). Fixes cockroachdb#48284 Release note: None Co-authored-by: Andrei Matei <andrei@cockroachlabs.com>
Commit: c87535a
49671: sql: handle inserts for partial indexes r=mgartner a=mgartner This commit makes INSERTs only write new entries to partial indexes when the new row satisfies the predicate expression of the partial index. In order to do this, the optimizer synthesizes a column for each partial index defined on the table that evaluates to true if the respective partial index should be written to. These columns are interpreted by the execution engine as a set of IndexIDs to not write to. Release note: None Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com>
Commit: 59b7964
sqlbase: add TableColSet wrapper for util.FastIntSet of ColumnID
There are numerous places where a `map[sqlbase.ColumnID]struct{}` or a `util.FastIntSet` is used to represent a set of `sqlbase.ColumnID`. This commit adds a typed wrapper around `util.FastIntSet` which is an efficient and ergonomic replacement for maps and `util.FastIntSet`. Release note: None
Commit: be12f0e
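The typed-wrapper pattern this commit describes can be sketched as follows. This is a minimal illustration, not the sqlbase implementation: the real `TableColSet` delegates to `util.FastIntSet` (a bit set), while a plain map stands in for it here.

```go
package main

import "fmt"

// ColumnID is a stand-in for sqlbase.ColumnID.
type ColumnID int

// TableColSet wraps an integer set so that callers can only add and
// query ColumnIDs, not arbitrary ints. The point of the wrapper is the
// typed API; the backing set is an implementation detail.
type TableColSet struct{ set map[int]struct{} }

// MakeTableColSet returns a set initialized with the given columns.
func MakeTableColSet(cols ...ColumnID) TableColSet {
	s := TableColSet{set: map[int]struct{}{}}
	for _, c := range cols {
		s.Add(c)
	}
	return s
}

// Add adds a column to the set; duplicates are ignored.
func (s *TableColSet) Add(col ColumnID) { s.set[int(col)] = struct{}{} }

// Contains reports whether the column is in the set.
func (s TableColSet) Contains(col ColumnID) bool {
	_, ok := s.set[int(col)]
	return ok
}

// Len returns the number of columns in the set.
func (s TableColSet) Len() int { return len(s.set) }

func main() {
	s := MakeTableColSet(1, 3)
	s.Add(3) // no-op: 3 is already present
	fmt.Println(s.Contains(1), s.Contains(2), s.Len())
}
```

Compared to a `map[sqlbase.ColumnID]struct{}`, a bit-set-backed wrapper like this avoids per-element allocation and makes membership checks and set operations cheap, which is why the commit replaces the maps.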
kv: introduce a rate limiter for the range consistency checker
Closes cockroachdb#47290 This commit introduces a new setting server.consistency_check.max_rate to control the rate at which the consistency checker may scan through the range to compute its checksum. Without a rate limit in place the checker may overwhelm the cluster with sufficiently large nodes. For example, on a 10TB node the checker is expected to produce 120MB/second of disk reads. The setting is defined in bytes/second and set to 8MB/s by default. We expect customers to continue to use it in conjunction with server.consistency_check.interval, which gives them the ability to control both the frequency and speed of checks. Release note (performance improvement): Introduce a new setting server.consistency_check.max_rate expressed in bytes/second to throttle the rate at which cockroach scans through the disk to perform a consistency check. This control is necessary to ensure smooth performance on a cluster with large node sizes (i.e. in the 10TB+ range)
Commit: 76c63de
docs: enhance the contributor guide
The contributor guide only contained a URL to the wiki, but was missing searchable keywords. This patch adds that, as well as a link to the community slack. Release note: None
Commit: 64f02af
Merge cockroachdb#49816 cockroachdb#49837
49816: sql: add support for ANALYZE <tablename> r=rytaft a=rytaft This commit adds support for `ANALYZE <tablename>` by adding the statement as syntactic sugar for the equivalent command `CREATE STATISTICS "" FROM <tablename>`. This improves compatibility with Postgres, and is needed to run the PostGIS tutorial as written. Note that this commit does not add support for `ANALYZE` without a table name. We can add support for that and other variants later if needed, but it is not necessary for the PostGIS tutorial. Fixes cockroachdb#49214 Release note (sql change): Added support for `ANALYZE <tablename>`, which causes the database to collect statistics on the given table for use by the optimizer. The functionality of this command is equivalent to the existing command `CREATE STATISTICS "" FROM <tablename>`, but it increases compatibility with Postgres by using the same syntax that Postgres uses. 49837: docs: enhance the contributor guide r=otan a=knz The contributor guide only contained a URL to the wiki, and was missing searchable keywords. This patch adds that, as well as a link to the community slack. Release note: None Co-authored-by: Rebecca Taft <becca@cockroachlabs.com> Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
Commit: d8d001b
sqlbase: Add system.scheduled_jobs table
Add system.scheduled_job table to system catalog. Add sql migration code to create table when upgrading. See https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20200414_scheduled_jobs.md Release Notes: None
Yevgeniy Miretskiy committed Jun 3, 2020
Commit: 2ae39b6
49690: lease: un-track deprecated use of Gossip r=nvanbenschoten a=tbg The work to remove the dependency has been done, so we're just waiting until we can remove the code. Until then, make sure this use of Gossip does not show up as a prominent caller to DeprecatedGossip any more, for easier bookkeeping. Release note: None Co-authored-by: Tobias Schottdorf <tobias.schottdorf@gmail.com>
Commit: 505cfdb
Commit: e3b281e
Merge cockroachdb#49763 cockroachdb#49834
49763: kv: introduce a rate limiter for the range consistency checker r=lunevalex a=lunevalex Closes cockroachdb#47290 This commit introduces a new setting server.consistency_check.max_rate to control the rate at which the consistency checker may scan through the range to compute its checksum. Without a rate limit in place the checker may overwhelm the cluster with sufficiently large nodes. For example, on a 10TB node the checker is expected to produce 120MB/second of disk reads. The setting is defined in bytes/second and set to 8MB/s by default. We expect customers to continue to use it in conjunction with server.consistency_check.interval, which gives them the ability to control both the frequency and speed of checks. Release note (performance improvement): Introduce a new setting server.consistency_check.max_rate expressed in bytes/second to throttle the rate at which cockroach scans through the disk to perform a consistency check. This control is necessary to ensure smooth performance on a cluster with large node sizes (i.e. in the 10TB+ range) 49834: .github: add geospatial code owners r=rytaft a=otan Release note: None Co-authored-by: Alex Lunev <alexl@cockroachlabs.com> Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
Commit: fd1bccf
rowcontainer: fix hash row container for some types
The explanation is that `HashDiskRowContainer` is implemented using `DiskRowContainer` with the equality columns (i.e. the columns to hash) of the former being the ordering columns for the latter, and those ordering columns are used to compute the keys of the rows (in `encodeRow`) so that we could store the row in the sorted order. This way we store the build (right) side of the join, but for the probe (left) side we use `hashMemRowIterator` to compute the key of the probing row. The key computation methods must be the same in both places, otherwise, the results of the join can be incorrect. 45229 broke this synchronization by changing the key computation method in `hashMemRowIterator.computeKey` to use `Fingerprint`. So we have to either use `Fingerprint` in `encodeRow` or use `Encode` in `computeKey`. The first choice doesn't seem to work because `Fingerprint` doesn't provide the ordering we need in `DiskRowContainer`, so we need to use the second approach. The ordering property is necessary because `DiskRowContainer` implements "hash row container" by sorting all rows on the ordering (i.e. hash) columns and using the ordering property to provide the "hashing" behavior (i.e. we would seek to the first row that has the same hash columns and then iterate from that row one row at a time forward until the hash columns remain the same). If we don't have the ordering property, then the necessary invariant that all rows that hash to the same value are contiguous is not maintained. Release note: None
Commit: 56244c4
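The invariant the fix above relies on — rows with equal hash columns must be stored contiguously, so that an ordered key encoding can substitute for hashing — can be sketched like this. The string key is a simplified stand-in for the real `Encode`-based key computation, and `container` is a toy, not `DiskRowContainer`:

```go
package main

import (
	"fmt"
	"sort"
)

// entry pairs an encoded key (the equality columns) with the row's payload.
type entry struct {
	key     string // stand-in for the order-preserving encoding of the hash columns
	payload string
}

// container keeps entries sorted by key, the way DiskRowContainer is
// sorted on its ordering (i.e. hash) columns.
type container struct{ entries []entry }

// add inserts an entry at its sorted position.
func (c *container) add(e entry) {
	i := sort.Search(len(c.entries), func(i int) bool { return c.entries[i].key >= e.key })
	c.entries = append(c.entries, entry{})
	copy(c.entries[i+1:], c.entries[i:])
	c.entries[i] = e
}

// probe seeks to the first row whose key matches and scans forward while
// the key stays equal. This only finds every match because equal keys
// are contiguous -- the invariant broken when the probe side computes
// keys with a different method (e.g. Fingerprint) than the build side.
func (c *container) probe(key string) []string {
	var out []string
	i := sort.Search(len(c.entries), func(i int) bool { return c.entries[i].key >= key })
	for ; i < len(c.entries) && c.entries[i].key == key; i++ {
		out = append(out, c.entries[i].payload)
	}
	return out
}

func main() {
	var c container
	c.add(entry{"b", "row1"})
	c.add(entry{"a", "row2"})
	c.add(entry{"b", "row3"})
	fmt.Println(c.probe("b")) // both rows with key "b", found in one contiguous scan
}
```

If the probe side encoded "b" differently than the build side, `probe` would seek to the wrong position and silently miss matches, which is exactly the class of incorrect-join-result bug the commit fixes.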
errorutil/unimplemented: use redirect server for Github links
This will allow us to capture telemetry such as click counts for each unimplemented error that is returned. Release note (general change): Links that are returned in error messages to point to unimplemented issues now use the CockroachLabs redirect/short-link server.
Commit: e993757
Commits on Jun 4, 2020
sql: update cascade_opt logictest
Making some updates that were made only to the `cascade` version. Release note: None
Commit: cca1093
opt: ON UPDATE cascades for Upsert
This change implements ON UPDATE actions for Upsert operations. The existing machinery for Update can be used without modification. Release note: None
Commit: 185c0ab
colexec: add JSONFetchVal operator for vectorized engine
Previously, the vectorized engine had no support for the JSONFetchVal operator. This commit adds it. BytesFamily is added to the compatible canonical type family group of DatumVecCanonicalTypeFamily, JSONFetchVal is declared as a supported binary operator, and its output type is registered so the corresponding operators can be generated. Release note (sql change): The vectorized engine now supports the JSONFetchVal (`->`) operator.
Commit: 67d55a0
schemaexpr: use sqlbase.TableColSet instead of maps
This commit replaces maps used as sets of integers with sqlbase.TableColSet because it is a more efficient set implementation. Release note: None
Commit: f61f13e
geo/geomfn: Implements ST_Segmentize for geometry
Fixes cockroachdb#49029 This PR implements the ST_Segmentize({geometry, float8}) builtin function, which modifies the given geometry such that no segment is longer than the given max_segment_length. This PR also refactors and adds some extra test cases for ST_Segmentize for geography. Release note (sql change): This PR implements the ST_Segmentize({geometry, float8}) builtin function.
Commit: 325b615
ui: Add Statements page storybook
To test style isolation for the Statements page we need a storybook that displays the entire Statements screen on its own. To make it work, a RouterProvider decorator is added which connects the router to a dummy (empty) store. The `statementsPage.fixture.ts` file contains a snapshot of the required props for the StatementsPage component. Release note: None
Commit: cf56656
ui: CSS modules for PlanView component
- refactor font imports to correctly resolve paths when a module is required from different locations
- move all files related to the PlanView component under the `planView` directory
- add a story for the PlanView component
Release note: None
Commit: 83de8d3
ui: Refactor classNames usage with bind for PlanView component
Previously, class names were constructed by accessing style module class names directly and assigning them to components, which was cumbersome and hard to read. To improve this, the `classnames/bind` alternative is used, which allows class names to be specified plainly. Release note: None
Commit: 5fff70b
ui: CSS modules for SortableTable and StatementsTable
This change refactors components to use CSS modules, incorporating all required styles without any external dependencies and preventing styles from being altered from outside. It affects several components which are tightly coupled with StatementsTable and couldn't be changed separately. The following components are changed: - HighlightedText - Drawer - StatementsTable - SortableTable Note that the `StatementsTable#makeCommonColumns` function is refactored to provide custom styles from parent to child components via props instead of overriding styles. The storybook is extended to show some components as independent units or in the context of the `StatementTable` component (if that is the only way the components work). Release note: None
Commit: af044f0
vendor: bump golang/protobuf to 1.4.2
v1.4.1 aggressively deprecated something (by inserting panics) that was reachable via gogoproto's marshaler. Luckily, v1.4.2 has this "fixed"; it caused enough trouble for others as well. Closes cockroachdb#49842. Release note: None
Commit: 415d614
Merge cockroachdb#49748 cockroachdb#49765
49748: ui: Cypress regression visual testing r=koorosh a=koorosh Resolves cockroachdb#49589 Visual regression testing is accomplished with the `cypress-image-snapshot` plugin. The main issue solved with this change is how modules are loaded from the custom `node_modules` location (./opt/node_modules). - for .ts files, `baseUrl` and `typeRoots` point to `./opt/node_modules` to resolve packages and typings. - The Webpack configuration is extended with an additional module resolver. - Plugin modules (which aren't processed by Webpack) use a helper function which appends the path to `opt/node_modules` to the required module name. Add screenshot testing for the main pages in the Admin UI. These tests are quite flaky and might fail because of the nondeterministic state of the server during test runs. Release note: None 49765: sqlbase: Add system.scheduled_jobs table r=miretskiy a=miretskiy Informs cockroachdb#49346 Add system.scheduled_job table to system catalog. Add sql migration code to create table when upgrading. See https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20200414_scheduled_jobs.md Release Notes: None Co-authored-by: Andrii Vorobiov <and.vorobiov@gmail.com> Co-authored-by: Yevgeniy Miretskiy <yevgeniy@cockroachlabs.com>
Commit: 55c8934
-
server: use three node KV cluster in TestSQLServer
This is more realistic in that it has more potential to tickle unsupported operations. Release note: None
Commit: ca019ba
-
server: add index backfill to TestSQLServer
This tickled a bug (now fixed) where we were attempting to schedule a non-local flow. Release note: None
Commit: 31489ab
-
server: use unique fake node ID in StartTenant
We had previously hard-coded a NodeID of 1 (matching the underlying TestServer's NodeID) to make "things work" for SQL tenants. The preceding commits lifted this restriction, so we now use a (static) NodeID which is highly unlikely to match any NodeID from the KV layer. Release note: None
Commit: 16e5049
-
Commit: b0a410b
-
Commit: 5ccec3f
-
opt: add partial index predicates to TableMeta
With this commit, `optbuilder` now adds partial index predicates of a table, as a `map[cat.IndexOrdinal]ScalarExpr`, to `TableMeta` when building SELECT queries. These predicates will be necessary in order to determine if a partial index can be used to satisfy a query. Release note: None
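The shape of this metadata can be sketched in Go. The names below (`IndexOrdinal`, `ScalarExpr`, `TableMeta`, `AddPartialIndexPredicate`) are hypothetical stand-ins for the real `cat.IndexOrdinal`, `opt.ScalarExpr`, and `opt.TableMeta`, not the actual optimizer API:

```go
package main

import "fmt"

// Hypothetical stand-ins for cat.IndexOrdinal and opt.ScalarExpr: table
// metadata carries one predicate expression per partial index, keyed by
// index ordinal, as described in the commit message above.
type IndexOrdinal = int
type ScalarExpr = string

type TableMeta struct {
	partialIndexPredicates map[IndexOrdinal]ScalarExpr
}

// AddPartialIndexPredicate records the predicate for a partial index so a
// planner can later check whether the index can satisfy a query's filter.
func (tm *TableMeta) AddPartialIndexPredicate(ord IndexOrdinal, pred ScalarExpr) {
	if tm.partialIndexPredicates == nil {
		tm.partialIndexPredicates = make(map[IndexOrdinal]ScalarExpr)
	}
	tm.partialIndexPredicates[ord] = pred
}

func main() {
	var tm TableMeta
	tm.AddPartialIndexPredicate(1, "b > 10")
	fmt.Println(tm.partialIndexPredicates[1])
}
```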
Commit: 795c509
-
geo/geotransform: implement ST_Transform
This PR implements ST_Transform, allowing the transformation from one SRID to another. The `geoprojbase` package defines a barebones set of types as well as a hardcoded list of SRIDs to keep in memory. I've only filled in a few for now, and will save updating this for a later PR. `geoproj` is strictly an interface library to the C library, which performs the necessary transformations. `geotransform` is where the function is actually handled, to be used by `geo_builtins.go`. Release note (sql change): Implemented the ST_Transform function for geometries.
Commit: 8b396c8
-
kvserver: fixup test failure message
Expected and real err were reversed. Release note: None
Commit: c31c4c4
-
schemachange: unskip TestDropWhileBackfill
Disabling the GC job was preventing this test from completing. Tested with `test stress`: 1000 successful runs. Fixes cockroachdb#44944. Release note: none.
Spas Bojanov committed Jun 4, 2020
Commit: 610f23f
-
Merge cockroachdb#47513 cockroachdb#49779 cockroachdb#49783 cockroachdb#49804
47513: ui: CSS modules for PlanView component r=koorosh a=koorosh Depends on: cockroachdb#47484 Related to: cockroachdb#47527 - Refactored font imports to correctly resolve paths when the module is required from different locations; fonts are imported directly from the `app.styl` file, which allows importing `typography.styl` without dependencies. This change was required because importing the `typography.styl` file from CSS modules failed with unresolved paths inside the `fonts.styl` file (which was required in the `typography.styl` file). Before:
```
app.styl
|-- typography.styl
    |-- fonts.styl
```
Now:
```
app.styl
|-- typography.styl
|-- fonts.styl
```
- Move all files related to the PlanView component under the `planView` directory - Added a storybook for the `PlanView` component - The `planView.module.styl` file contains a copy of the styles (from `statements.styl`) used by the component only. Release note: None 49779: opt: incorporate cast volatility r=RaduBerinde a=RaduBerinde This change incorporates the new cast volatility information into the VolatilitySet property. Release note: None 49783: geo/geotransform: implement ST_Transform r=sumeerbhola a=otan This PR implements ST_Transform, allowing the transformation from one SRID to another. The `geoprojbase` package defines a barebones set of types as well as a hardcoded list of SRIDs to keep in memory. I've only filled in a few for now, and will save updating this for a later PR. `geoproj` is strictly an interface library to the C library, which performs the necessary transformations. `geotransform` is where the function is actually handled, to be used by `geo_builtins.go`. Resolves cockroachdb#49055 Resolves cockroachdb#49056 Resolves cockroachdb#49057 Resolves cockroachdb#49058 Release note (sql change): Implemented the ST_Transform function for geometries. 49804: opt: ON UPDATE cascades for Upsert r=RaduBerinde a=RaduBerinde This change implements ON UPDATE actions for Upsert operations.
The existing machinery for Update can be used without modification. Release note: None Co-authored-by: Andrii Vorobiov <and.vorobiov@gmail.com> Co-authored-by: Radu Berinde <radu@cockroachlabs.com> Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
Commit: 82f172a
-
Merge cockroachdb#47606 cockroachdb#49770 cockroachdb#49819
47606: ui: CSS modules for Table components r=koorosh a=koorosh Depends on cockroachdb#47513 Related to cockroachdb#47527 This change refactors components to use CSS modules, incorporating all required styles without any external dependencies and preventing styles from being altered from outside. It affects several components that are tightly coupled with StatementsTable and couldn't be changed separately. The following components are changed: - HighlightedText - Drawer - StatementsTable - SortableTable Note that the `StatementsTable#makeCommonColumns` function is refactored to provide custom styles from parent to child components via props instead of overriding styles. Storybook is extended to show some components as independent units or in the context of the `StatementTable` component (if that is the only way the components work). Release note: None 49770: changefeedccl: change default flush interval to 5s r=dt a=dt We observed a customer cluster's changefeeds to cloud storage 'getting stuck', which on further investigation was determined to be happening because they were spending too much time flushing. This was because they were writing to a cloud sink and the default flush interval of 200ms (poller interval of 1s / 5) meant they spent all of their time flushing. This default was picked while testing with lower-latency sinks and was noted in a comment as somewhat arbitrary. This change does two things: it increases the default to the poller interval if unspecified, instead of poller interval / 5, meaning 1s instead of 200ms at the default setting, and if the sink being used is cloud storage, it changes it to the greater of that or 5s. Users who truly desire lower latency can of course specify their own 'resolved' interval, so this change in the default is for those that are indifferent, and increasing the latency to 1s or 5s reduces the chance of hitting this unfortunate edge case when the sink is too slow.
Release note (enterprise change): The default flush interval for changefeeds that do not specify a 'resolved' option is now 1s instead of 200ms, or 5s if the changefeed sink is cloud storage. 49819: Use faster set for column IDs in schemaexpr r=mgartner a=mgartner #### sqlbase: add ColSet wrapper for util.FastIntSet of ColumnID There are numerous places where a `map[sqlbase.ColumnID]struct{}` or a `util.FastIntSet` is used to represent a set of `sqlbase.ColumnID`. This commit adds a typed wrapper around `util.FastIntSet` which is an efficient and ergonomic replacement for maps and `util.FastIntSet`. Release note: None #### schemaexpr: use sqlbase.ColSet instead of maps This commit replaces maps used as sets of integers with sqlbase.ColSet because it is a more efficient set implementation. Release note: None Co-authored-by: Andrii Vorobiov <and.vorobiov@gmail.com> Co-authored-by: David Taylor <tinystatemachine@gmail.com> Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com>
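The ColSet idea above — a typed wrapper giving set semantics over column IDs — can be sketched in Go. This is a toy bit set limited to 64 IDs for illustration; the real `sqlbase.ColSet` wraps `util.FastIntSet` and has no such limit, and all names here are illustrative:

```go
package main

import "fmt"

// ColumnID identifies a table column in this sketch.
type ColumnID int

// ColSet is a toy typed wrapper around a bit set, in the spirit of the
// sqlbase.ColSet described above. Supports IDs 0..63 only in this sketch.
type ColSet struct{ bits uint64 }

func (s *ColSet) Add(c ColumnID) { s.bits |= 1 << uint(c) }

func (s *ColSet) Contains(c ColumnID) bool { return s.bits&(1<<uint(c)) != 0 }

// Len counts set bits using Kernighan's trick (clear lowest set bit).
func (s *ColSet) Len() int {
	n := 0
	for b := s.bits; b != 0; b &= b - 1 {
		n++
	}
	return n
}

func main() {
	var s ColSet
	s.Add(3)
	s.Add(7)
	s.Add(3) // adding twice is a no-op, unlike appending to a slice
	fmt.Println(s.Contains(3), s.Contains(4), s.Len())
}
```

A bit set like this replaces `map[ColumnID]struct{}` with no per-element allocation, which is the efficiency argument the commit makes.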
Commit: 994d306
-
geo/geomfn: implement Intersection, PointOnSurface, Union
The last of the topology operators up to Chapter 20. Release note (sql change): Implements the ST_Intersection, ST_PointOnSurface and ST_Union builtin functions.
Commit: 6496403
-
config: remove outdated TODO for Marc
We don't return nil values from MVCCScan. Also, a bit of harmless cleanup.
Commit: 7be718a
-
Mostly cleaning up comments. Noticed while re-familiarizing myself with this package.
Commit: 01856bb
-
config: fix GetLargestObjectID for pseudo object IDs
This was not behaving correctly, and was breaking things when a pseudo table had the largest ID below MaxReservedDescID.
Commit: a102b14
-
config: properly handle decoding errors in GetLargestObjectID
This was wrong before. Adds testing.
Commit: d8ae78a
-
tree: move tests that pull in server to a different package
This adds the tree/eval_test package, which has dependencies on server and colexec. The tree_test package itself doesn't have to have these heavy deps anymore, which will speed up debugging cycles. Release note: None
Commit: 8619dd1
-
Merge cockroachdb#49827 cockroachdb#49833 cockroachdb#49869 cockroachdb#49870 cockroachdb#49871
49827: geo/geomfn: Implements ST_Segmentize for geometry r=otan a=abhishek20123g Fixes cockroachdb#49029 This PR implements the ST_Segmentize({geometry, float8}) builtin function, which modifies a given geometry such that no segment is longer than the given max_segment_length. This PR also refactors and adds extra test cases for ST_Segmentize for geography. Release note (sql change): This PR implements the ST_Segmentize({geometry, float8}) builtin function. 49833: geo/geomfn: implement Intersection, PointOnSurface, Union r=sumeerbhola a=otan The last of the topology operators up to Chapter 20. Resolves cockroachdb#48951 Resolves cockroachdb#49832 Resolves cockroachdb#49064 Release note (sql change): Implements the ST_Intersection, ST_PointOnSurface and ST_Union builtin functions. 49869: vendor: bump golang/protobuf to 1.4.2 r=knz a=tbg v1.4.1 aggressively deprecated something (by inserting panics) that was reachable via gogoproto's marshaler. Luckily, v1.4.2 has this "fixed"; it caused enough trouble for others as well. Closes cockroachdb#49842. Release note: None 49870: schemachange: unskip TestDropWhileBackfill r=spaskob a=spaskob Disabling the GC job was preventing this test from completing. Tested with `test stress`: 1000 successful runs. Fixes cockroachdb#44944. Release note: none. 49871: kvserver: fixup test failure message r=andreimatei a=andreimatei Expected and real err were reversed. Release note: None Co-authored-by: abhishek20123g <abhishek20123g@gmail.com> Co-authored-by: Oliver Tan <otan@cockroachlabs.com> Co-authored-by: Tobias Schottdorf <tobias.schottdorf@gmail.com> Co-authored-by: Spas Bojanov <spas@cockroachlabs.com> Co-authored-by: Andrei Matei <andrei@cockroachlabs.com>
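A minimal planar sketch of the segmentize behavior described for ST_Segmentize, assuming simple 2D points and straight-line interpolation rather than the real geometry types (the geography variant works on great-circle arcs instead):

```go
package main

import (
	"fmt"
	"math"
)

// Point is a hypothetical 2D point for this sketch.
type Point struct{ X, Y float64 }

// segmentize inserts evenly spaced points into a polyline so that no
// segment exceeds maxLen, mirroring the described ST_Segmentize behavior
// for planar geometry. This is an illustrative helper, not CockroachDB code.
func segmentize(line []Point, maxLen float64) []Point {
	out := []Point{line[0]}
	for i := 1; i < len(line); i++ {
		a, b := line[i-1], line[i]
		d := math.Hypot(b.X-a.X, b.Y-a.Y)
		// Number of equal sub-segments needed so each is <= maxLen.
		n := int(math.Ceil(d / maxLen))
		for j := 1; j <= n; j++ {
			t := float64(j) / float64(n)
			out = append(out, Point{a.X + t*(b.X-a.X), a.Y + t*(b.Y-a.Y)})
		}
	}
	return out
}

func main() {
	// A 10-unit segment with max length 4 is split into 3 sub-segments,
	// so the result has 4 points (the 2 endpoints plus 2 inserted).
	pts := segmentize([]Point{{0, 0}, {10, 0}}, 4)
	fmt.Println(len(pts))
}
```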
Commit: 67252bc
-
server,roachtest: remove leftover usage of `cockroach quit --decommission`
Fixes cockroachdb#49635. This is fallout from cockroachdb#49350 that wasn't picked up by CI. Release note: None
Commit: 7e1bec5
-
Release note (<category, see below>): <what> <show> <why>
Commit: 89fe4be
-
49836: errorutil/unimplemented: use redirect server for Github links r=knz a=rafiss closes cockroachdb#45504 This will allow us to capture telemetry such as click counts for each unimplemented error that is returned. Release note (general change): Links returned in error messages pointing to unimplemented-feature issues now use the Cockroach Labs redirect/short-link server. Co-authored-by: Rafi Shamim <rafi@cockroachlabs.com>
Commit: 588c0a3
-
config: introduce pseudo "tenants" zone
Fixes cockroachdb#49318. Fixes cockroachdb#49445. Progress towards cockroachdb#48123. Informs cockroachdb#48774. This commit introduces a new pseudo object ID in the system tenant's namespace called "tenants". Like "liveness" and "timeseries" before it, the pseudo object allows zone configurations to be applied to pseudo-objects that do not live directly in the system tenant's SQL keyspace. In this case, the "tenants" zone allows zone configurations to be set by the system tenant and applied to all other tenants in the system. There may come a time when we want secondary tenants to have more control over their zone configurations, but that will require a much larger change to the zone config structure and UX as a whole. While making this change, we rationalize the rest of zone configuration handling and how it relates to multi-tenancy. Now that secondary tenant ranges have a zone config to call their own, we can make sense of calls from KV into the zone configuration infrastructure. We gate off calls that don't make sense for secondary tenants and clean up hooks in SQL that handle zone config manipulation. All of this works towards a good cause - we eliminate the remaining uses of `keys.TODOSQLCodec` from `pkg/sql/...` and `pkg/config/...`, bringing us a big step closer towards being able to remove the placeholder and close cockroachdb#48123. This work also reveals that in order to address cockroachdb#48774, we need to be able to determine split points from the SystemConfig. This makes it very difficult to split on secondary tenant object (e.g. table) boundaries. However, it makes it straightforward to split on secondary tenant keyspace boundaries. This is already what we were thinking (see cockroachdb#47907), so out of both convenience and desire, I expect that we'll follow this up with a PR that splits Ranges only at secondary tenant boundaries - placing the overhead of an otherwise empty tenant at only a single Range and a few KBs of data.
Commit: 6953182
-
This commit makes git ignore all files in the `colexec` folder that end with `.eg.go`, which improves the environment setup for the cases when we change file names and simplifies maintenance. Release note: None
Commit: 0b19f64
-
49818: colexec: add JSONFetchVal operator for vectorized engine r=yuzefovich a=yongyanglai Previously, the vectorized engine had no support for the JSONFetchVal operator. This commit adds the JSONFetchVal operator. In this commit, I added BytesFamily to the compatible canonical type family group of DatumVecCanonicalTypeFamily. Then I declared JSONFetchVal as a supported binary operator and registered the output type of JSONFetchVal to generate operators. Fixes cockroachdb#49469 Release note (sql change): The vectorized engine now supports the JSONFetchVal (`->`) operator. Co-authored-by: Yongyang Lai <m@yylai.xyz>
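The semantics the `->` operator provides (independent of the vectorized implementation) can be illustrated with a small hedged Go helper built on `encoding/json`; `jsonFetchVal` is a hypothetical name, not CockroachDB's implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonFetchVal sketches the SQL -> operator's semantics: fetch a key from a
// JSON object, returning the value as JSON bytes, or nil when the key is
// absent. Illustrative only; the real operator works on an internal JSON
// representation, not raw bytes.
func jsonFetchVal(doc []byte, key string) ([]byte, error) {
	var obj map[string]json.RawMessage
	if err := json.Unmarshal(doc, &obj); err != nil {
		return nil, err
	}
	v, ok := obj[key]
	if !ok {
		return nil, nil // SQL -> yields NULL for a missing key
	}
	return v, nil
}

func main() {
	doc := []byte(`{"a": {"b": 1}, "c": "x"}`)
	v, err := jsonFetchVal(doc, "a")
	if err != nil {
		panic(err)
	}
	// The fetched value is itself JSON, so -> chains naturally: a->b.
	fmt.Printf("%s\n", v)
}
```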
Commit: f98432f
-
sql: enable creation of indexes on tables with user defined types
Work for cockroachdb#48728. This PR teaches the index backfill infrastructure to hydrate types before use. Release note: None
Commit: 2ac28f8
-
49860: opt: add partial index predicates to TableMeta r=mgartner a=mgartner With this commit, `optbuilder` now adds partial index predicates of a table, as a `map[cat.IndexOrdinal]ScalarExpr`, to `TableMeta` when building SELECT queries. These predicates will be necessary in order to determine if a partial index can be used to satisfy a query. Release note: None Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com>
Commit: c817d37
-
49784: config: introduce pseudo "tenants" zone r=nvanbenschoten a=nvanbenschoten Fixes cockroachdb#49318. Fixes cockroachdb#49445. Progress towards cockroachdb#48123. Informs cockroachdb#48774. This commit introduces a new pseudo object ID in the system tenant's namespace called "tenants". Like "liveness" and "timeseries" before it, the pseudo object allows zone configurations to be applied to pseudo-objects that do not live directly in the system tenant's SQL keyspace. In this case, the "tenants" zone allows zone configurations to be set by the system tenant and applied to all other tenants in the system. There may come a time when we want secondary tenants to have more control over their zone configurations, but that will require a much larger change to the zone config structure and UX as a whole. While making this change, we rationalize the rest of zone configuration handling and how it relates to multi-tenancy. Now that secondary tenant ranges have a zone config to call their own, we can make sense of calls from SQL into the zone configuration infrastructure. We gate off calls that don't make sense for secondary tenants and clean up hooks in SQL that handle zone config manipulation. All of this works towards a good cause - we eliminate the remaining uses of `keys.TODOSQLCodec` from `pkg/sql/...` and `pkg/config/...`, bringing us a big step closer towards being able to remove the placeholder and close cockroachdb#48123. This work also reveals that in order to address cockroachdb#48774, we need to be able to determine split points from the SystemConfig. This makes it very difficult to split on secondary tenant object (e.g. table) boundaries. However, it makes it straightforward to split on secondary tenant keyspace boundaries.
This is already what we were thinking (see cockroachdb#47907), so out of both convenience and desire, I expect that we'll follow this up with a PR that splits Ranges only at secondary tenant boundaries - placing the overhead of an otherwise empty tenant at only a single Range and a few KBs of data. Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Commit: 9b50756
Commits on Jun 5, 2020
-
opt: change JoinMultiplicity from a Relational prop to a join field
Previously, the JoinMultiplicity property was stored as a Relational prop. This is a problem because all expressions in a memo group share the same Relational props during exploration, and there are exploration rules that can flip a join's left/right multiplicity. This patch instead stores JoinMultiplicity as a join field that is initialized during construction of the join. This fixes the shared Relational props issue, and also makes it possible for JoinMultiplicity to aid in calculating other logical properties. Fixes cockroachdb#49821 Release note: None
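The props-sharing problem can be miniaturized in Go. The types below are hypothetical stand-ins, not the optimizer's real structures; the point is only that a pointer shared across a memo group cannot carry a property that differs per join variant:

```go
package main

import "fmt"

// Relational stands in for props shared by every expression in a memo group.
type Relational struct{ note string }

// Join stores multiplicity as a per-expression field set at construction,
// as the commit above describes, alongside the group-shared props pointer.
type Join struct {
	props        *Relational // shared across the whole group
	multiplicity string      // per-join field, fixed at construction
}

func main() {
	shared := &Relational{note: "group props"}
	a := Join{props: shared, multiplicity: "left-preserved"}
	// An exploration rule commutes the join, flipping left and right:
	b := Join{props: shared, multiplicity: "right-preserved"}
	// The shared pointer is identical, so a group-level property could not
	// distinguish the two variants; the per-join field can.
	fmt.Println(a.props == b.props, a.multiplicity, b.multiplicity)
}
```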
Commit: ab412e1
-
49855: server,roachtest: remove leftover usage of `cockroach quit --decommission` r=irfansharif a=irfansharif Fixes cockroachdb#49635. This is fallout from cockroachdb#49350 that wasn't picked up by CI. Release note: None Co-authored-by: irfan sharif <irfanmahmoudsharif@gmail.com>
Commit: c51b249
-
It had rotted. Also, add JSON datatype. Release note: None Release justification: test-only code change
Commit: 4575bdb
-
execgen: extract template reading code
Previously, all template generators had to read their template file themselves. Now, this is done by execgen main, opening the door to global transforms that affect all templates in the same way. Release note: None
Commit: 6a5479b
-
49822: opt: change JoinMultiplicity from a Relational prop to a join field r=DrewKimball a=DrewKimball Previously, the JoinMultiplicity property was stored as a Relational prop. This is a problem because all expressions in a memo group share the same Relational props during exploration, and there are exploration rules that can flip a join's left/right multiplicity. This patch instead stores JoinMultiplicity as a join field that is initialized during construction of the join. This fixes the shared Relational props issue, and also makes it possible for JoinMultiplicity to aid in calculating other logical properties. Fixes cockroachdb#49821 Release note: None Co-authored-by: Drew Kimball <drewk@cockroachlabs.com>
Commit: 6aaf3d5
-
49737: tree: move tests that pull in server to a different package r=jordanlewis a=jordanlewis This adds the tree/eval_test package, which has dependencies on server and colexec. The tree_test package itself doesn't have to have these heavy deps anymore, which will speed up debugging cycles. Release note: None Co-authored-by: Jordan Lewis <jordanthelewis@gmail.com>
Commit: 04445a9
-
Merge cockroachdb#49751 cockroachdb#49776
49751: execgen: extract template reading code r=jordanlewis a=jordanlewis Previously, all template generators had to read their template file themselves. Now, this is done by execgen main, opening the door to global transforms that affect all templates in the same way. Release note: None 49776: backupccl: fix bug when backing up dropped tables with revision history r=pbardea a=pbardea When performing an incremental backup with revision history, we want to include all spans that were public at any point during the latest interval under consideration (the time between the last backup and when you are performing the incremental backup). However, consider a table that was dropped before the interval started. The table's descriptor may still be visible (in the DROPPED state). We should not be interested in the spans for this database. So, when going through the list of revisions to table descriptors, we should make sure that the table in question was not DROPPED at some point during this interval. To see why this is needed, consider the following scenario (all backups are assumed to be taken with revision_history): - Create table mydb.a - Create table mydb.b - Drop table mydb.a - Take a backup of mydb (full) - Take an incremental backup (inc) of mydb - Create table mydb.c - Take another incremental backup (inc2) of mydb The backup "inc" and "inc2" should not be considered as backing up table "mydb.a", since it has been dropped at that point. Note that since "inc" does not see any descriptor changes, only "mydb.b" is included in its backup. However, previously, "inc2" would see a table descriptor for "mydb.a" (even though it is dropped) and include it in the set of spans included in "inc2". This is an issue since "inc" did not include this span, and thus there is a gap in the coverage for this dropped table. Fixes cockroachdb#49707. 
Release note (bug fix): There was a bug where, when performing incremental backups with revision history on a database (or full cluster), if a table in the database you were backing up was dropped and then other tables were later created, the backup would return an error. This is now fixed. Co-authored-by: Jordan Lewis <jordanthelewis@gmail.com> Co-authored-by: Paul Bardea <pbardea@gmail.com>
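The inclusion rule described above can be sketched as a toy Go predicate over descriptor revisions. All types here are hypothetical, not the backupccl code, and the sketch ignores details like the revision in force at the interval's start:

```go
package main

import "fmt"

// revision is a hypothetical stand-in for a table descriptor revision.
type revision struct {
	modTime int64 // when this descriptor revision was written
	dropped bool  // whether the descriptor was in the DROPPED state
}

// includeSpans reports whether any revision inside the backup interval
// (start, end] shows the table as public, mirroring the rule above: a table
// dropped before the interval contributes no spans to the incremental.
func includeSpans(revs []revision, start, end int64) bool {
	for _, r := range revs {
		if r.modTime > start && r.modTime <= end && !r.dropped {
			return true
		}
	}
	return false
}

func main() {
	// mydb.a was created at t=1 and dropped at t=3; the incremental backup
	// covers (10, 20], so its (still visible) descriptor must be excluded.
	a := []revision{{modTime: 1}, {modTime: 3, dropped: true}}
	fmt.Println(includeSpans(a, 10, 20))
}
```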
Commit: d5bd854
-
Merge cockroachdb#49697 cockroachdb#49823 cockroachdb#49880
49697: server: use unique fake node ID in StartTenant r=asubiotto a=tbg We had previously hard-coded a NodeID of 1 (matching the underlying TestServer's NodeID) to make "things work" for SQL tenants. The commits in this PR lifted this restriction, so we now use a (static) NodeID which is highly unlikely to match any NodeID from the KV layer. The main work item was to make sure DistSQL does not schedule any flows. There are many places in the SQL codebase that construct flows and it was difficult to ensure that none of them attempt to schedule a nonlocal one. (We do error out when we attempt to SetupFlow them, though.) Release note: None 49823: adding myself to AUTHORS r=DrewKimball a=DrewKimball 49880: colexec: simplify .gitignore r=yuzefovich a=yuzefovich This commit makes git ignore all files in the `colexec` folder that end with `.eg.go`, which improves the environment setup for the cases when we change file names and simplifies maintenance. Release note: None Co-authored-by: Tobias Schottdorf <tobias.schottdorf@gmail.com> Co-authored-by: Drew Kimball <drewk@cockroachlabs.com> Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
Commit: b0c3eca
-
cliflags: fix the desc for the socket-dir flag
I had the description field mixed up when I previously deprecated `--socket` in favor of `--socket-dir`. Release note (bug fix): The description provided with `--help` on the CLI for `--socket-dir` and `--socket` has been fixed. They were invalid since v20.1.0.
Commit: 2676d75
-
Merge cockroachdb#46573 cockroachdb#49302
46573: workload: fix rand generator r=jordanlewis a=jordanlewis It had rotted. Also, add JSON datatype. Closes cockroachdb#46569. 49302: sql: enable creation of indexes on tables with user defined types r=jordanlewis a=rohany Work for cockroachdb#48728. This PR teaches the index backfill infrastructure to hydrate types before use. Release note: None Co-authored-by: Jordan Lewis <jordanthelewis@gmail.com> Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
Commit: d411572
-
49905: cliflags: fix the desc for the socket-dir flag r=tbg a=knz (found by @tbg) I had the description field mixed up when I previously deprecated `--socket` in favor of `--socket-dir`. Release note (bug fix): The description provided with `--help` on the CLI for `--socket-dir` and `--socket` has been fixed. They were invalid since v20.1.0. Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
Commit: 326cb32
-
kv,adminserver: properly remove the no-nodeid semantics
The Decommission RPC doesn't need to accept an empty node ID list as of v20.2 since `cockroach quit --decommission` was removed. However, `TestDecommission` still used that case and started failing as a result. This patch fixes the test to not rely on that behavior. Additionally, this patch causes the RPC to return an error when no node ID is specified, instead of silently turning into a no-op. A discussion remains of whether the RPC should accept a way to specify the "local" node (maybe more explicitly than via an empty list of node IDs), like many of the other RPCs already do. This discussion came up in a separate issue which wants that behavior for the `node drain` command. I am expecting that `node decommission` will want that option too. However, let's address that at that time. Release note: None
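The new validation can be sketched in Go with hypothetical request types (the real RPC's shape differs; this only illustrates "error instead of silent no-op"):

```go
package main

import (
	"errors"
	"fmt"
)

// DecommissionRequest is a hypothetical stand-in for the RPC request.
type DecommissionRequest struct{ NodeIDs []int }

// validateDecommission mirrors the change described above: an empty node ID
// list is now rejected with an error rather than silently doing nothing.
func validateDecommission(req DecommissionRequest) error {
	if len(req.NodeIDs) == 0 {
		return errors.New("no node ID specified")
	}
	return nil
}

func main() {
	fmt.Println(validateDecommission(DecommissionRequest{}))
	fmt.Println(validateDecommission(DecommissionRequest{NodeIDs: []int{2}}))
}
```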
Commit: e0ad962
-
util/log: clean up and document the handling of stderr
Previous work in this area of the code introduced a confusion between two orthogonal concepts: - each logger might copy its log entries to the process' external stderr stream (e.g. the terminal during interactive use), as set by its "stderrThreshold" variable. - direct writes by Go code to process-wide file descriptor 2 (such as done by the Go runtime) or `os.Stderr` (such as done by 3rd party packages when doing their own ad-hoc logging) can be redirected to a logger's output file. The confusion (mostly mine - @knz) was to mix the two kinds of "stderr", mistakenly conflating "entries submitted to this logger via API calls" and "direct writes to fd 2 by other go code without looking at the logging API". These are actually completely separate and independent concepts/mechanisms. The code clarifies the situation as follows: - the process' external stderr stream is always available via `log.OrigStderr` and this is meant to never change throughout execution. - the external stderr stream is the sink for `Shout()` API calls and also the copy of log entries whose severity exceeds the "stderrThreshold" variable. - separately, *at most one logger* may redirect internal writes to fd 2 and os.Stderr to its log file. This is determined by its variable "noRedirectInternalStderrWrites" (previously named "noRedirectStderr"). Beyond this, this commit fixes 3 bugs. 1. the code was intending to both redirect the standard stderr file descriptor (fd 2 on unix, error handle on windows) and also `os.Stderr` separately, but failed to do so on unix build targets. It was done correctly for windows. This is now corrected so that `os.Stderr` gets assigned on all targets. (The separate assignment of `os.Stderr` is necessary because although Go initializes this to be equivalent to the standard file descriptor upon process startup, other Go code can assign `os.Stderr` after initialization.) 2.
upon encountering a write error to its output file, a logger would previously report that write error twice to the process' external stderr. This has been corrected so that the write error is only reported once. 3. previously, upon a log.Fatal performed while configuration would not otherwise cause a copy the F entry go to the process external stderr, the code would override the configuration and force a copy of the entry to the external stderr stream. There is no good reason for this - either the user wants F entries on the external stderr and signal this desire via `--logtostderr=FATAL` or lower, or they don't want F entries there at all via `--logtostderr=NONE`. There is no reason for an override and the code should honor the configuration. (I introduced the bug together with the confusion mentioned at the beginning.) Release note: None
Commit: df3cee6
-
util/log: add an entry header incl timestamp to direct writes
There's an uncommon code path that directly writes to either a log file or the process' external stderr:
- to report I/O errors encountered during logging (e.g. errors writing to the output log file);
- to copy the details of a panic object reported via `log.ReportPanic` to either external stderr or the log file, when the Go runtime would otherwise only report it to the "other side".
In these cases, the data was previously written as-is to the output sink, without a proper log entry header - timestamp, goroutine, file/line, etc. This made it hard to identify the precise moment when the output was produced and its precise origin. This patch enhances these cases by adding the missing log entry header. Release note: None
Commit: ec7a7a2
-
49912: kv,adminserver: properly remove the no-nodeid semantics r=tbg a=knz Fixes cockroachdb#49896 The Decommission RPC doesn't need to accept an empty node ID list as of v20.2 since `cockroach quit --decommission` was removed. However, `TestDecommission` still used that case and started failing as a result. This patch fixes the test to not rely on the behavior. Additionally, this patch causes the RPC to return an error when no node ID is specified, instead of silently turning into a no-op. A discussion remains of whether the RPC should accept a way to specify the "local" node (maybe more explicitly than via an empty list of node IDs), like many of the other RPCs already do. This discussion came up in a separate issue which wants that behavior for the `node drain` command. I am expecting that `node decommission` will want that option too. However let's address that at that time. Release note: None Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
Commit: d25c463
-
49849: util/log: clean up and document the handling of stderr r=tbg a=knz Required by cockroachdb#48051
Previous work in this area of the code introduced a confusion between two orthogonal concepts:
- each logger might copy its log entries to the process' external stderr stream (e.g. the terminal during interactive use), as set by its "stderrThreshold" variable.
- direct writes by Go code to process-wide file descriptor 2 (such as done by the Go runtime) or `os.Stderr` (such as done by 3rd-party packages doing their own ad-hoc logging) can be redirected to a logger's output file.
The confusion (mostly mine - @knz) was to mix the two kinds of "stderr", mistakenly conflating "entries submitted to this logger via API calls" with "direct writes to fd 2 by other Go code that never touches the logging API". These are actually completely separate and independent concepts/mechanisms.
The code clarifies the situation as follows:
- the process' external stderr stream is always available via `log.OrigStderr` and is meant to never change throughout execution.
- the external stderr stream is the sink for `Shout()` API calls and also receives copies of log entries whose severity meets or exceeds the "stderrThreshold" variable.
- separately, *at most one logger* may redirect internal writes to fd 2 and `os.Stderr` to its log file. This is determined by its variable "noRedirectInternalStderrWrites" (previously named "noRedirectStderr").
Beyond this, this commit fixes 3 bugs:
1. The code intended to redirect both the standard stderr file descriptor (fd 2 on unix, the error handle on windows) and `os.Stderr` separately, but failed to do so on unix build targets; it was done correctly for windows. This is now corrected so that `os.Stderr` gets assigned on all targets. (The separate assignment of `os.Stderr` is necessary because, although Go initializes it to be equivalent to the standard file descriptor upon process startup, other Go code can assign `os.Stderr` after initialization.)
2. Upon encountering a write error to its output file, a logger would previously report that write error twice to the process' external stderr. This has been corrected so that the write error is only reported once.
3. Previously, upon a log.Fatal, if the configuration would not otherwise cause a copy of the F entry to go to the process' external stderr, the code would override the configuration and force a copy of the entry to the external stderr stream. There is no good reason for this: either the user wants F entries on the external stderr and signals this desire via `--logtostderr=FATAL` or lower, or they don't want F entries there at all via `--logtostderr=NONE`. There is no reason for an override and the code should honor the configuration. (I introduced the bug together with the confusion mentioned at the beginning.)
Release note: None Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
Commit: c9b9c01
-
49844: sql: tag skipped test with issue r=andreimatei a=andreimatei Release note: None Co-authored-by: Andrei Matei <andrei@cockroachlabs.com>
Commit: 74e186e
-
sem: unify division by zero check and fix it in a few places
Release note (bug fix): Previously, in some cases, CockroachDB didn't check whether the right argument of `Div` (`/`), `FloorDiv` (`//`), or `Mod` (`%`) operations was zero, so instead of correctly returning a "division by zero" error, we were returning `NaN`; this is now fixed. Additionally, the "modulus by zero" error message has been changed to "division by zero" to be in line with Postgres.
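The unified check can be sketched as a single guard applied before any of the division-family operators runs. This helper is an assumption for illustration, not CockroachDB's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// errDivByZero mirrors the single error message the commit standardizes on:
// Mod (`%`) by zero now also reports "division by zero", matching Postgres,
// instead of returning NaN.
var errDivByZero = errors.New("division by zero")

// checkedOp is a hypothetical wrapper: it rejects a zero divisor before
// delegating to the underlying Div/FloorDiv/Mod operation.
func checkedOp(op func(a, b float64) float64, a, b float64) (float64, error) {
	if b == 0 {
		return 0, errDivByZero
	}
	return op(a, b), nil
}

func main() {
	div := func(a, b float64) float64 { return a / b }
	if _, err := checkedOp(div, 1, 0); err != nil {
		fmt.Println(err) // an error, rather than NaN
	}
}
```

Centralizing the guard means every operator family shares one error message and one zero check, rather than each call site re-implementing (or forgetting) it.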
Commit: b973a82
-
colexec: add support for bit and some arithmetic binary operators
This commit adds support for the `Bitand`, `Bitor`, `Bitxor`, `FloorDiv`, and `Mod` binary operators for both native and datum-backed types. Release note (sql change): The vectorized execution engine now supports the `Bitand` (`&`), `Bitor` (`|`), `Bitxor` (`^`), `FloorDiv` (`//`), and `Mod` (`%`) binary operators.
Commit: ca6c664
-
49761: colexec: add support for bit and some arithmetic binary operators r=yuzefovich a=yuzefovich **sem: unify division by zero check and fix it in a few places** Release note (bug fix): Previously, in some cases, CockroachDB didn't check whether the right argument of `Div` (`/`), `FloorDiv` (`//`), or `Mod` (`%`) operations was zero, so instead of correctly returning a "division by zero" error, we were returning `NaN`; this is now fixed. Additionally, the "modulus by zero" error message has been changed to "division by zero" to be in line with Postgres. **colexec: add support for bit and some arithmetic binary operators** This commit adds support for the `Bitand`, `Bitor`, `Bitxor`, `FloorDiv`, and `Mod` binary operators for both native and datum-backed types. Release note (sql change): The vectorized execution engine now supports the `Bitand` (`&`), `Bitor` (`|`), `Bitxor` (`^`), `FloorDiv` (`//`), and `Mod` (`%`) binary operators. Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
Commit: 432a1c5
-
roachtest: convert secondary-index-multi-version to version upgrade framework
Fixes cockroachdb#49670. Release note: None
Commit: 28a136f
-
49851: rowcontainer: fix hash row container for some types r=yuzefovich a=yuzefovich
The explanation is that `HashDiskRowContainer` is implemented using `DiskRowContainer`, with the equality columns (i.e. the columns to hash) of the former being the ordering columns of the latter. Those ordering columns are used to compute the keys of the rows (in `encodeRow`) so that we can store the rows in sorted order. This is how we store the build (right) side of the join; for the probe (left) side we use `hashMemRowIterator` to compute the key of the probing row. The key computation methods must be the same in both places, otherwise the results of the join can be incorrect.
cockroachdb#45229 broke this synchronization by changing the key computation method in `hashMemRowIterator.computeKey` to use `Fingerprint`. So we have to either use `Fingerprint` in `encodeRow` or use `Encode` in `computeKey`. The first choice doesn't work because `Fingerprint` doesn't provide the ordering we need in `DiskRowContainer`, so we take the second approach.
The ordering property is necessary because `DiskRowContainer` implements the "hash row container" by sorting all rows on the ordering (i.e. hash) columns and using the ordering property to provide the "hashing" behavior: we seek to the first row that has the same hash columns and then iterate forward one row at a time while the hash columns remain the same. Without the ordering property, the necessary invariant that all rows hashing to the same value are contiguous is not maintained.
Release note: None Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
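The failure mode above boils down to the build side and the probe side having to share one key-encoding routine. A minimal sketch, with a hypothetical `encodeKey` standing in for the shared encoder (the real container uses ordered column encodings):

```go
package main

import "fmt"

// encodeKey stands in for the single key-encoding routine that both sides
// of a disk-backed hash join must share: the build side (encodeRow in the
// commit) and the probe side (computeKey). The encoding here is a sketch.
func encodeKey(eqCols []string) string {
	key := ""
	for _, c := range eqCols {
		key += c + "\x00" // separator keeps ("ab","c") distinct from ("a","bc")
	}
	return key
}

func main() {
	// Build side: store rows keyed by their equality columns.
	build := map[string][]string{
		encodeKey([]string{"a", "1"}): {"a", "1", "payload"},
	}
	// Probe side: the lookup succeeds only because it uses the same encoder.
	// Mixing two encodings (the bug) silently yields missed matches.
	row, ok := build[encodeKey([]string{"a", "1"})]
	fmt.Println(ok, row)
}
```

If the probe side switched to a different encoder (as `Fingerprint` did), the map lookup here would return nothing even though a matching row exists, which is exactly the class of wrong-join-result the fix addresses.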
Commit: 7741951
-
49878: roachtest: convert secondary-index-multi-version to version upgrade framework r=rohany a=rohany Fixes cockroachdb#49670. Release note: None Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
Commit: dfb6e1b
Commits on Jun 6, 2020
-
sql: make `SHOW DATABASES` not need to access schemas
Fixes cockroachdb#49339. This PR fixes a performance regression in the `SHOW DATABASES` statement. The regression was caused by the table backing the statement performing a lookup of each database's schemas. Release note (performance improvement): Fix a performance regression in the `SHOW DATABASES` command introduced in 20.1. Release note (sql change): Add the `crdb_internal.databases` virtual table.
Commit: a18b887
-
sql: use a separate type for pgcodes
Fixes cockroachdb#49694. This PR uses a separate wrapped type for pgcodes to ensure that arbitrary strings are not passed to the pgerror functions in place of pgcodes. Release note: None
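The benefit of a wrapped type is that the compiler rejects a plain string where a pgcode is expected. A minimal sketch of the pattern, with illustrative names (`Code`, `MakeCode`, `UniqueViolation`) rather than the exact API:

```go
package main

import "fmt"

// Code is a sketch of a wrapped type for pgcodes. Because the string is
// hidden in a struct field, an error message can no longer be passed where
// a code is expected, even via an implicit conversion.
type Code struct{ code string }

// MakeCode is a hypothetical constructor for illustration.
func MakeCode(s string) Code { return Code{code: s} }

func (c Code) String() string { return c.code }

// UniqueViolation mirrors the SQLSTATE code 23505 for unique violations.
var UniqueViolation = MakeCode("23505")

func newError(code Code, msg string) string {
	return fmt.Sprintf("(%s) %s", code, msg)
}

func main() {
	fmt.Println(newError(UniqueViolation, "duplicate key"))
	// newError("duplicate key", "23505") would not compile:
	// a string is not a Code.
}
```

Swapping the two arguments, or passing a message string as the code, now becomes a compile-time error instead of a silently malformed pgwire error.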
Commit: c4307d9
-
opt: add rules to eliminate a do-nothing join under a Project
Previously, the optimizer couldn't eliminate a join in the input of a Project when the removal would have no effect on the output of the Project operator. This patch adds rules to replace a join with one of its input relations when the following conditions are met: 1. The Project doesn't use any columns from the discarded input. 2. The Join doesn't eliminate or duplicate rows from the preserved input. Fixes cockroachdb#49149 Release note (sql change): The optimizer can now remove an unnecessary join from the input of a Project operator.
Commit: 6bf047d
-
opt: modify limit pushdown rule to support inner joins
Previously, the optimizer could not push a limit into an InnerJoin. This patch replaces PushLimitIntoLeftJoin with two rules which perform the same function as well as handle the InnerJoin case. A limit can be pushed into a given side of an InnerJoin when rows from that side are guaranteed to be preserved by the join. Release note (sql change): improve performance for queries with a limit on a join that is guaranteed to preserve input rows.
Commit: ada1319
-
49788: opt: add rules to eliminate a do-nothing join under a Project r=DrewKimball a=DrewKimball Previously, the optimizer couldn't eliminate a join in the input of a Project when the removal would have no effect on the output of the Project operator. This patch adds rules to replace a join with one of its input relations when the following conditions are met: 1. The Project doesn't use any columns from the "discarded" input. 2. The Join doesn't eliminate or duplicate rows from the "preserved" input. Fixes cockroachdb#49149 Release note (sql change): The optimizer can now remove an unnecessary join from the input of a Project operator. Co-authored-by: Drew Kimball <andrewekimball@gmail.com>
Commit: 1e836e6
-
49802: opt: modify limit pushdown rule to support inner joins r=DrewKimball a=DrewKimball Previously, the optimizer could not push a limit into an InnerJoin. This patch replaces PushLimitIntoLeftJoin with two rules which perform the same function as well as handle the InnerJoin case. A limit can be pushed into a given side of an InnerJoin when rows from that side are guaranteed to be preserved by the join. Release note (sql change): improve performance for queries with a limit on a join that is guaranteed to preserve input rows. Co-authored-by: Drew Kimball <drewk@cockroachlabs.com>
Commit: ef4eec0
-
log: add a mutex for stderr redirection
The `TestLogScope` infrastructure is meant to force-configure logging to go to a temporary directory. It was designed with the assumption it would be used "outside" of server initialization, i.e. non-concurrently with `log` API calls. However, at least one test (`TestHBAAuthenticationRules`) violates the assumption and switches the output directory while a server is running. After checking it appears that the behavior is sound, but there was one racy access remaining: the write to `os.Stderr` during the redirect. This had been racy ever since TestLogScope was introduced; however it was only visible on windows because the unix builds were missing the assignment to `os.Stderr`. A recent fix to make the builds consistent revealed the race to our CI checker. This patch fixes it by adding the missing mutex. Release note: None
Commit: 14a5fe7
-
cli/interactive_tests: preserve more artifacts
Only the sub-directory `logs` is mounted inside the docker image run by TestDockerCLI; ensure more stuff is saved there. Release note: None
Commit: 22641d3
-
cli/interactive_tests: make test_sql_mem_monitor more portable
Release note: None
Commit: 5ce39f5
-
49936: log: add a mutex for stderr redirection r=tbg a=knz Fixes cockroachdb#49930 The `TestLogScope` infrastructure is meant to force-configure logging to go to a temporary directory. It was designed with the assumption it would be used "outside" of server initialization, i.e. non-concurrently with `log` API calls. However, at least one test (`TestHBAAuthenticationRules`) violates the assumption and switches the output directory while a server is running. After checking it appears that the behavior is sound, but there was one racy access remaining: the write to `os.Stderr` during the redirect. This had been racy ever since TestLogScope was introduced; however it was only visible on windows because the unix builds were missing the assignment to `os.Stderr`. A recent fix to make the builds consistent revealed the race to our CI checker. This patch fixes it by adding the missing mutex. Release note: None Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
Commit: f4a9f52
-
49522: sql: make `SHOW DATABASES` not need to access schemas r=jordanlewis a=rohany This PR fixes a performance regression in the `SHOW DATABASES` statement. The regression was caused by the table backing the statement performing a lookup of each database's schemas. Release note (performance improvement): Fix a performance regression in the `SHOW DATABASES` command introduced in 20.1. Release note (sql change): Add the `crdb_internal.databases` virtual table. Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
Commit: 8082645
-
49858: sql: use a separate type for pgcodes r=knz a=rohany Fixes cockroachdb#49694. This PR uses a separate wrapped type for pgcodes to ensure that arbitrary strings are not passed to the pgerror functions in place of pgcodes. Release note: None Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
Commit: 57e37b4
-
49937: cli/interactive_tests: misc improvements r=rohany a=knz Found while working on cockroachdb#48051 Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>
Commit: 3bf61fd
Commits on Jun 7, 2020
-
sql: add tests for use of CTAS with enum types
Work for cockroachdb#48728. This PR adds some tests to verify that different uses of CTAS work as expected when user defined types are in the mix. Release note: None
Commit: d033bd8
-
sql: enable adding check constraints that use enums
Work for cockroachdb#48728. This PR ensures that check constraints that use enums can be added and validated. It also updates the sites where check validations fail to ensure that the displayed error message has been deserialized from the internal representation. Release note: None
Commit: 74270db
-
49944: sql: add tests for use of CTAS with enum types r=jordanlewis a=rohany Work for cockroachdb#48728. This PR adds some tests to verify that different uses of CTAS work as expected when user defined types are in the mix. Release note: None Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
Commit: 600d317
-
49945: sql: enable adding check constraints that use enums r=jordanlewis a=rohany Work for cockroachdb#48728. This PR ensures that check constraints that use enums can be added and validated. It also updates the sites where check validations fail to ensure that the displayed error message has been deserialized from the internal representation. Release note: None Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
Commit: 2da8b25
Commits on Jun 8, 2020
-
sqlbase: lift shared fields required for leasing to Descriptor
This lifts NameInfo, Version, and Modification time to a new message that lives on the Descriptor rather than inside of the TableDescriptor. It does not yet adopt these fields. Release note: None
Commit: c60221d
-
sqlbase,catalog: Move the catalog.Descriptor interface to sqlbase
For now, move the Descriptor interface down into sqlbase as part of the effort to eliminate sqlbase.DescriptorProto. Release note: None
Commit: b21b245
-
sql,sqlbase: remove SetID() from the DescriptorProto interface
In preparation for merging that interface with DescriptorInterface. Release note: None
Commit: 11c44d3
-
sqlbase,*: replace DescriptorProto with DescriptorInterface
This change is large but largely mechanical. The basic idea is that we want to stop passing around raw pointers to protobuf structs. Instead we'd like to pass around higher-level wrappers which implement interfaces. There's some room for debate about the precise nature of Mutable/Immutable wrappers for descriptor types, but over time, hopefully, we'll move descriptor manipulation behind clear interfaces. It's worth noting that this commit is a half-step: it began, somewhat unfortunately, by introducing the TypeDescriptor wrappers, and then goes all the way to deleting the `DescriptorProto` interface and unexporting `WrapDescriptor`. Release note: None
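The wrapper pattern the commit moves toward can be sketched in a few lines. The struct and interface names below are illustrative of the pattern, not the exact CockroachDB types:

```go
package main

import "fmt"

// TableDescriptor stands in for the raw protobuf struct that used to be
// passed around as a bare pointer.
type TableDescriptor struct {
	ID   uint32
	Name string
}

// Descriptor sketches the higher-level interface callers move onto
// instead of raw proto pointers.
type Descriptor interface {
	GetID() uint32
	GetName() string
}

// ImmutableTableDescriptor wraps the proto behind the interface; callers
// can no longer mutate the underlying struct directly.
type ImmutableTableDescriptor struct {
	desc TableDescriptor
}

func (d *ImmutableTableDescriptor) GetID() uint32   { return d.desc.ID }
func (d *ImmutableTableDescriptor) GetName() string { return d.desc.Name }

func main() {
	var d Descriptor = &ImmutableTableDescriptor{
		desc: TableDescriptor{ID: 52, Name: "users"},
	}
	fmt.Println(d.GetID(), d.GetName())
}
```

Because callers depend only on the interface, a Mutable variant (or a different backing representation) can later be substituted without touching call sites.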
Commit: 97a57a4
-
sqlbase: introduce DatabaseDescriptorInterface and ImmutableDatabaseDescriptor
Next we'll stop having the raw descriptor implement that interface. Release note: None
Commit: 1807be8
-
sqlbase,*: adopt wrapper structs around DatabaseDescriptor
This is a large but also mostly mechanical change. There's room for improvement on the uniformity of it all but let's leave that be for now as I'm getting tired and need to keep making progress. It'll be easier to clean it up later. In particular, I've increasingly been feeling that an interface-based approach to these descriptors would be worthwhile and that that would be easy to accomplish for database but alas. That will have to be another big and mechanical change. Release note: None
Commit: 41f75e9
-
sqlbase,*: remove direct access to ID and Name fields of TypeDescriptor
This is in anticipation of removing those fields from the proto. This commit also lifts the "unwrapping" of descriptors into catalogkv and eliminates UnwrapDescriptor. Release note: None
Commit: 3a3fb41
-
sqlbase: add sanity to the various TypeDescriptor types
This commit adopts the new fields. TODO(ajwerner): Revisit this commit, I think it gets a lot of things wrong. Release note: None
Commit: a473768
-
sqlbase: lift methods from TypeDescriptor to ImmutableTypeDescriptor
Release note: None
Commit: 132714c
-
sqlbase: remove Name and ID from TypeDescriptor
These fields now live on DescriptorMeta. Release note: None
Commit: 9209363
-
sqlbase,*: deprecate ID and Name on DatabaseDescriptor
This commit adopts the interface methods for accessing the ID and Name of a DatabaseDescriptor. The reworking in this commit of changes made earlier in this PR is annoying; I'm sorry to the reviewers. I'm also increasingly sensing the urgency of eschewing the concrete descriptor wrappers in favor of interfaces, but I'm not going to try to deal with that in this PR. It is in many ways more of a mess than what was there, but it's laying the foundations. Release note: None
Commit: 2477bee
-
sqlbase: add wrapper types for SchemaDescriptor
In anticipation of lifting methods to the wrapper types. Release note: None
Commit: 8b1ae3e
-
sqlbase: lift interface methods from SchemaDescriptor
Release note: None
Commit: bc641f9