From ef23708f10323749e523160e5c5b2a527684ec76 Mon Sep 17 00:00:00 2001
From: Ian Evans
Date: Mon, 7 Mar 2022 16:19:46 -0800
Subject: [PATCH] Apply suggestions from Stephanie's review

Co-authored-by: Stephanie Bodoff
---
 _includes/releases/v22.1/v22.1.0-alpha.2.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/_includes/releases/v22.1/v22.1.0-alpha.2.md b/_includes/releases/v22.1/v22.1.0-alpha.2.md
index e1b8da63494..762ce83be82 100644
--- a/_includes/releases/v22.1/v22.1.0-alpha.2.md
+++ b/_includes/releases/v22.1/v22.1.0-alpha.2.md
@@ -13,9 +13,9 @@ Release Date: March 7, 2022

Security updates

 - CockroachDB is now able to [authenticate users](../v22.1/security-reference/authentication.html) via the web UI and through SQL sessions when the client provides a cleartext password and the stored credentials are encoded using the `SCRAM-SHA-256` algorithm. Support for a `SCRAM` authentication flow is a separate feature and is not the target of this release note. In particular, for SQL client sessions it makes it possible to use the authentication methods `password` (cleartext passwords) and `cert-password` (TLS client cert or cleartext password) with either `CRDB-BCRYPT` or `SCRAM-SHA-256` stored credentials. Previously, only `CRDB-BCRYPT` stored credentials were supported for cleartext password authentication. [#74301][#74301]
-- The hash method used to encode cleartext passwords before storing them is now configurable, via the new [cluster setting](../v22.1/cluster-settings.html) `server.user_login.password_encryption`. Its supported values are `crdb-bcrypt` and `scram-sha-256`. The cluster setting only is enabled after all cluster nodes have been upgraded, at which point its default value is `scram-sha-256`. Prior to completion of the upgrade, the cluster behaves as if the cluster setting is set to `crdb-bcrypt` for backward compatibility. Note that the preferred way to populate password credentials for SQL user accounts is to pre-compute the hash client-side, and pass the precomputed hash via [`CREATE USER WITH PASSWORD`](../v22.1/create-user.html), [`CREATE ROLE WITH PASSWORD`](../v22.1/create-role.html), [`ALTER USER WITH PASSWORD`](../v22.1/alter-user.html), or [`ALTER ROLE WITH PASSWORD`](../v22.1/alter-role.html). This ensures that the server never sees the cleartext password. [#74301][#74301]
+- The hash method used to encode cleartext passwords before storing them is now configurable via the new [cluster setting](../v22.1/cluster-settings.html) `server.user_login.password_encryption`. Its supported values are `crdb-bcrypt` and `scram-sha-256`. The cluster setting is enabled only after all cluster nodes have been upgraded, at which point its default value is `scram-sha-256`. Prior to completion of the upgrade, the cluster behaves as if the cluster setting is set to `crdb-bcrypt` for backward compatibility. Note that the preferred way to populate password credentials for SQL user accounts is to pre-compute the hash client-side and pass the precomputed hash via [`CREATE USER WITH PASSWORD`](../v22.1/create-user.html), [`CREATE ROLE WITH PASSWORD`](../v22.1/create-role.html), [`ALTER USER WITH PASSWORD`](../v22.1/alter-user.html), or [`ALTER ROLE WITH PASSWORD`](../v22.1/alter-role.html). This ensures that the server never sees the cleartext password (see the first sketch below). [#74301][#74301]
 - The cost of the hashing function for `scram-sha-256` is now configurable via the new [cluster setting](../v22.1/cluster-settings.html) `server.user_login.password_hashes.default_cost.scram_sha_256`. Its default value is 119680, which corresponds to an approximate password check latency of 50-100ms on modern hardware. This value should be increased over time to reflect improvements to CPU performance: the latency should not become so small that it becomes feasible to brute-force passwords via repeated login attempts. Future versions of CockroachDB will likely update this default value.
 [#74301][#74301]
-- When using the default HBA authentication method `cert-password` for SQL client connections, and the SQL client does not present a TLS client certificate to the server, CockroachDB now automatically upgrades the password handshake protocol to use `SCRAM-SHA-256` if the user's stored password uses the `SCRAM` encoding. The previous behavior of requesting a cleartext password is still used if the stored password is encoded using the `CRDB-BCRYPT` format. An operator can force clients to _always_ request `SCRAM-SHA-256` when a TLS client cert is not provided in order to guarantee the security benefits of `SCRAM` using the authentication methods `cert-scram-sha-256` (either TLS client cert _or_ `SCRAM-SHA-256`) and `scram-sha-256` (only `SCRAM-SHA-256`). As in previous releases, mandatory cleartext password authentication can be requested (e.g. for debugging purposes) by using the HBA method `password`. This automatic protocol upgrade can be manually disabled using the new cluster setting `server.user_login.cert_password_method.auto_scram_promotion.enable` and setting it to `false`. Disable automatic protocol upgrades if, for example, certain client drivers are found to not support `SCRAM-SHA-256` authentication properly. [#74301][#74301]
+- When using the default HBA authentication method `cert-password` for SQL client connections, and the SQL client does not present a TLS client certificate to the server, CockroachDB now automatically upgrades the password handshake protocol to use `SCRAM-SHA-256` if the user's stored password uses the `SCRAM` encoding. The previous behavior of requesting a cleartext password is still used if the stored password is encoded using the `CRDB-BCRYPT` format. An operator can force clients to **always** request `SCRAM-SHA-256` when a TLS client cert is not provided, in order to guarantee the security benefits of `SCRAM`, by using the authentication methods `cert-scram-sha-256` (either TLS client cert _or_ `SCRAM-SHA-256`) and `scram-sha-256` (only `SCRAM-SHA-256`). As in previous releases, you can request mandatory cleartext password authentication (e.g., for debugging purposes) by using the HBA method `password`. You can manually disable this automatic protocol upgrade by setting the new cluster setting `server.user_login.cert_password_method.auto_scram_promotion.enable` to `false`. Disable automatic protocol upgrades if, for example, certain client drivers do not properly support `SCRAM-SHA-256` authentication. [#74301][#74301]
 - In order to promote a transition to `SCRAM-SHA-256` for password authentication, CockroachDB now automatically attempts to convert stored password hashes to `SCRAM-SHA-256` after a cleartext password authentication succeeds, if the target hash method configured via `server.user_login.password_encryption` is `scram-sha-256`. This auto-conversion can happen during either SQL logins or HTTP logins that use passwords, whichever occurs first. When an auto-conversion occurs, a structured event of type `password_hash_converted` is logged to the `SESSIONS` channel. The `PBKDF2` iteration count on the hash is chosen to preserve the latency of client logins, keeping it similar to the latency incurred by the starting `bcrypt` cost. (For example, the default configuration of `bcrypt` cost 10 is converted to a `SCRAM` iteration count of 119680.)
 This choice, however, lowers the cost of brute-forcing passwords for an attacker with access to the encoded password hashes, if they have access to ASICs or GPUs, by a factor of ~10. For example, if it would previously cost them $1,000,000 to brute-force a `crdb-bcrypt` hash, it would now cost them "just" $100,000 to brute-force the `scram-sha-256` hash that results from this conversion. If an operator wishes to compensate for this, three options are available:
   1. Set up their infrastructure such that only passwords with high entropy can be used. For example, this can be achieved by disabling the ability of end-users to select their own passwords and auto-generating passwords for them, or by enforcing entropy checks during password selection. This way, the entropy of the password itself compensates for the lower hash complexity.
   1. Manually select a higher `SCRAM` iteration count. This can be done either by pre-computing `SCRAM` hashes client-side and providing the pre-computed hash using `ALTER USER WITH PASSWORD`, or by adjusting the cluster setting `server.user_login.password_hashes.default_cost.scram_sha_256` and asking CockroachDB to recompute the hash.
@@ -24,7 +24,7 @@ Release Date: March 7, 2022
   1. The protocol requires that a 64-bit key is used to uniquely identify a session. Some of these bits are used to identify the CockroachDB node that owns the session; the rest are random. If the node ID is small enough, only 12 bits are used for the ID and the remaining 52 bits are random. Otherwise, 32 bits are used for both the ID and the random secret.
   1. A fixed per-node rate limit is used: there can be at most 256 failed cancellation attempts per second. Any cancel requests that exceed this rate are ignored, which makes it harder for an attacker to guess random cancellation keys. Specifically, assuming a 32-bit secret and 256 concurrent sessions on a node, it would take 2^16 seconds (about 18 hours) for an attacker to be certain they have cancelled a query.
   1. No response is returned for a cancel request. This makes it impossible for an attacker to know if their guesses are working. Unsuccessful attempts are [logged internally](../v22.1/logging-use-cases.html#security-and-audit-monitoring) with warnings; large numbers of these messages could indicate malicious activity. [#67501][#67501]
-- The cluster setting `server.user_login.session_revival_token.enabled` has been added. It is `false` by default. If set to `true`, then a new token-based authentication mechanism is enabled. A token can be generated using the `crdb_internal.create_session_revival_token` built in [function](../v22.1/functions-and-operators.html). The token has a lifetime of 10 minutes and is cryptographically signed to prevent spoofing and brute forcing attempts. When initializing a session later, the token can be presented in a `pgwire` `StartupMessage` with a parameter name of `crdb:session_revival_token_base64`, with the value encoded in `base64`. If this parameter is present, all other authentication checks are disabled, and if the token is valid and has a valid signature, the user who originally generated the token authenticates into a new SQL session. If the token is not valid, then authentication fails. The token does not have use-once semantics, so the same token can be used any number of times to create multiple new SQL sessions within the 10 minute lifetime of the token. As such, the token should be treated as highly sensitive cryptographic information. This feature is meant to be used by multi-tenant deployments to move a SQL session from one node to another. It requires the presence of a valid `Ed25519` keypair in `tenant-signing..crt` and `tenant-signing..key`. [#75660][#75660]
+- The cluster setting `server.user_login.session_revival_token.enabled` has been added. It is `false` by default. If set to `true`, a new token-based authentication mechanism is enabled. A token can be generated using the `crdb_internal.create_session_revival_token` built-in [function](../v22.1/functions-and-operators.html). The token has a lifetime of 10 minutes and is cryptographically signed to prevent spoofing and brute-forcing attempts. When initializing a session later, the token can be presented in a `pgwire` `StartupMessage` with a parameter name of `crdb:session_revival_token_base64`, with the value encoded in `base64`. If this parameter is present, all other authentication checks are disabled, and if the token is valid and has a valid signature, the user who originally generated the token authenticates into a new SQL session. If the token is not valid, authentication fails. The token does not have use-once semantics, so the same token can be used any number of times to create multiple new SQL sessions within its 10-minute lifetime. As such, the token should be treated as highly sensitive cryptographic information. This feature is meant to be used by multi-tenant deployments to move a SQL session from one node to another. It requires the presence of a valid `Ed25519` keypair in `tenant-signing..crt` and `tenant-signing..key` (see the second sketch below). [#75660][#75660]
 - When the `sql.telemetry.query_sampling.enabled` cluster setting is enabled, SQL names and client IPs are no longer redacted in telemetry logs. [#76676][#76676]
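A minimal sketch of the password-encoding settings above, assuming all nodes are already upgraded; the user name and the SCRAM verifier string are hypothetical placeholders, and accepting a pre-hashed password assumes the cluster is configured to allow pre-hashed password input:

~~~ sql
-- Store newly set passwords with SCRAM-SHA-256 encoding.
SET CLUSTER SETTING server.user_login.password_encryption = 'scram-sha-256';

-- Optionally raise the hashing cost above the 119680 default.
SET CLUSTER SETTING server.user_login.password_hashes.default_cost.scram_sha_256 = 250000;

-- Preferred: pre-compute the hash client-side so the server never sees the
-- cleartext password. The verifier below is a truncated placeholder in the
-- SCRAM-SHA-256$<iterations>:<salt>$<stored key>:<server key> format.
ALTER USER maxroach WITH PASSWORD 'SCRAM-SHA-256$119680:c2FsdA==$c3RvcmVkLWtleQ==:c2VydmVyLWtleQ==';
~~~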
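And a minimal sketch of the session revival token flow; it assumes a multi-tenant deployment with the signing keypair in place, assumes the zero-argument form of the built-in, and uses `encode(..., 'base64')` only to illustrate the base64 step:

~~~ sql
-- Enable the token-based mechanism (default false).
SET CLUSTER SETTING server.user_login.session_revival_token.enabled = true;

-- From an authenticated session, mint a token; it is returned as BYTES and
-- must be base64-encoded before being sent in the pgwire StartupMessage
-- parameter crdb:session_revival_token_base64.
SELECT encode(crdb_internal.create_session_revival_token(), 'base64');
~~~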

General changes

@@ -39,7 +39,7 @@ Release Date: March 7, 2022

Enterprise edition changes

-- Client certificates may now be provided for the `webhook` [changefeed sink](../v22.1/changefeed-sinks.html). [#74645][#74645]
+- Client certificates can now be provided for the `webhook` [changefeed sink](../v22.1/changefeed-sinks.html). [#74645][#74645]
 - CockroachDB now redacts more potentially sensitive URI elements from changefeed job descriptions. This is a breaking change for workflows that copy URIs. As an alternative, the unredacted URI can be accessed from the jobs table directly. [#75174][#75174]
 - Changefeeds now output the topic names created by the Kafka sink. Furthermore, these topic names are displayed in the [`SHOW CHANGEFEED JOBS`](../v22.1/show-jobs.html#show-changefeed-jobs) query. [#75223][#75223]
 - [Backup and restore](../v22.1/take-full-and-incremental-backups.html) jobs now allow encryption and decryption with GCS KMS. [#75750][#75750]
@@ -61,7 +61,7 @@ Release Date: March 7, 2022
 ~~~

 [#76583][#76583]
-- Users may now alter the sink URI of an existing changefeed. This can be achieved by executing `ALTER CHANGEFEED SET sink = ''` where the sink type of the new sink must match the sink type of the old sink that was chosen at the creation of the changefeed. [#77043][#77043]
+- Users can now alter the sink URI of an existing changefeed by executing `ALTER CHANGEFEED SET sink = ''`, where the sink type of the new sink must match the sink type chosen at the creation of the changefeed (see the sketch below). [#77043][#77043]
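A minimal sketch of altering a changefeed's sink; the job ID and Kafka URI are hypothetical, and this assumes the changefeed is paused while it is altered:

~~~ sql
-- 12345 stands in for a real changefeed job ID.
PAUSE JOB 12345;

-- The new sink must be the same sink type (here Kafka) as the original.
ALTER CHANGEFEED 12345 SET sink = 'kafka://new-broker.example.com:9092';

RESUME JOB 12345;
~~~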

SQL language changes

@@ -99,7 +99,7 @@ Release Date: March 7, 2022
 - Transaction ID to transaction fingerprint ID mapping is now stored in the new transaction ID cache, a FIFO unordered in-memory buffer. The size of the buffer is 64 MB by default and is configurable via the `sql.contention.txn_id_cache.max_size` [cluster setting](../v22.1/cluster-settings.html) (see the first sketch below). Consequently, two additional metrics are introduced:
   - `sql.contention.txn_id_cache.size`: the current memory usage of the transaction ID cache.
   - `sql.contention.txn_id_cache.discarded_count`: the number of resolved transaction IDs that were dropped due to memory constraints. [#74115][#74115]
-- Added new [builtin functions](../v22.1/functions-and-operators.html#built-in-functions) called `crdb_internal.revalidate_unique_constraint`, `crdb_internal.revalidate_unique_constraints_in_table`, and `crdb_internal.revalidate_unique_constraints_in_all_tables`, which can be used to revalidate existing unique constraints. The different variations support validation of a single constraint, validation of all unique constraints in a table, and validation of all unique constraints in all tables in the current database, respectively. If any constraint fails validation, the functions will return an error with a hint about which data caused the constraint violation. These violations can then be resolved manually by updating or deleting the rows in violation. This will be useful to users who think they may have been affected by issue [#73024](https://github.com/cockroachdb/cockroach/issues/73024). [#75548][#75548]
+- Added new [built-in functions](../v22.1/functions-and-operators.html#built-in-functions) called `crdb_internal.revalidate_unique_constraint`, `crdb_internal.revalidate_unique_constraints_in_table`, and `crdb_internal.revalidate_unique_constraints_in_all_tables`, which can be used to revalidate existing unique constraints. The variations support validation of a single constraint, of all unique constraints in a table, and of all unique constraints in all tables in the current database, respectively. If any constraint fails validation, the functions return an error with a hint about which data caused the constraint violation. These violations can then be resolved manually by updating or deleting the rows in violation. This is useful to users who think they may have been affected by issue [#73024](https://github.com/cockroachdb/cockroach/issues/73024) (see the second sketch below). [#75548][#75548]
 - The [`SHOW GRANTS ON SCHEMA`](../v22.1/show-grants.html) statement now includes the `is_grantable` column. [#75722][#75722]
 - CockroachDB now disallows [type casts](../v22.1/scalar-expressions.html#explicit-type-coercions) from [`ENUM`](../v22.1/enum.html) to [`BYTES`](../v22.1/bytes.html). [#75816][#75816]
 - [`EXPORT PARQUET`](../v22.1/export.html) has a new `compression` option whose value can be `gzip` or `snappy`. An example query (see the third sketch below):
@@ -197,7 +197,7 @@ Release Date: March 7, 2022
 - CockroachDB now turns on support for hash-sharded indexes in implicitly partitioned tables. Previously, CockroachDB blocked users from creating hash-sharded indexes in all kinds of partitioned tables, including implicitly partitioned tables using `PARTITION ALL BY` or `REGIONAL BY ROW`. Primary keys cannot be hash-sharded if the table is explicitly partitioned with `PARTITION BY`, and an index cannot be hash-sharded if the index is explicitly partitioned with `PARTITION BY`.
 Partitioning columns cannot be placed explicitly as key columns of a hash-sharded index, including a regional-by-row table's `crdb_region` column. [#76358][#76358]
 - When a hash-sharded index is partitioned, ranges are now pre-split within every possible partition on shard boundaries. Each partition is pre-split into at most 16 ranges; a partition with a lower bucket count is split into one range per bucket. Note that only list partitions are pre-split; CockroachDB doesn't pre-split range partitions. [#76358][#76358]
 - New user privileges were added: `VIEWCLUSTERSETTING` and `NOVIEWCLUSTERSETTING`, which control whether a user can view, but not modify, cluster settings. [#76012][#76012]
-- Several error cases in geospatial and other builtin functions now return more appropriate error codes. [#76458][#76458]
+- Several error cases in geospatial and other built-in functions now return more appropriate error codes. [#76458][#76458]
 - [Expression indexes](../v22.1/expression-indexes.html) can no longer have duplicate expressions. [#76863][#76863]
 - The `crdb_internal.serialize_session` and `crdb_internal.deserialize_session` functions now handle prepared statements. When deserializing, any prepared statements that existed when the session was serialized are re-prepared. Re-preparing a statement throws an error if the current session already has a statement with that name. [#76399][#76399]
 - The `experimental_enable_hash_sharded_indexes` session variable was removed, along with the corresponding cluster setting. The ability to create hash-sharded indexes is now enabled automatically. SQL statements that refer to the setting will still work but will have no effect. [#76937][#76937]
@@ -353,7 +353,7 @@ Release Date: March 7, 2022
 - Fixed a bug where some of the [`cockroach node`](../v22.1/cockroach-node.html) subcommands did not handle `--timeout` properly. [#76427][#76427]
 - Fixed a bug that caused the [optimizer](../v22.1/cost-based-optimizer.html) to omit join filters in rare cases when reordering joins, which could result in incorrect query results. This bug had been present since v20.2. [#76334][#76334]
 - Fixed a bug where the list of recently decommissioned nodes and the historical list of decommissioned nodes incorrectly displayed decommissioned nodes. [#76538][#76538]
-- Fixed a bug where CockroachDB could incorrectly not return a row from a table with multiple column families when that row contains a `NULL` value when a composite type ([`FLOAT](../v22.1/float.html)`, [`DECIMAL`](../v22.1/decimal.html), [`COLLATED STRING`](../v22.1/collate.html), or an array of these types) is included in the `PRIMARY KEY`. [#76563][#76563]
+- Fixed a bug where CockroachDB could incorrectly fail to return a row from a table with multiple column families when that row contains a `NULL` value and a composite type ([`FLOAT`](../v22.1/float.html), [`DECIMAL`](../v22.1/decimal.html), [`COLLATED STRING`](../v22.1/collate.html), or an array of these types) is included in the `PRIMARY KEY`. [#76563][#76563]
 - There is now a 1-hour timeout when sending [Raft](../v22.1/architecture/replication-layer.html#raft) snapshots, to prevent stalled snapshot transfers from blocking Raft log truncation and causing the Raft log to grow very large. The timeout is configurable via the `COCKROACH_RAFT_SEND_SNAPSHOT_TIMEOUT` environment variable. [#76589][#76589]
 - Fixed an error that could sometimes occur when sorting the output of the [`SHOW CREATE ALL TABLES`](../v22.1/show-create.html) statement.
 [#76639][#76639]
 - Fixed a bug where [backups](../v22.1/take-full-and-incremental-backups.html) incorrectly backed up database, schema, and type descriptors that were in a `DROP` state at the time the backup was run. This bug made it impossible to back up and restore if a cluster had both dropped descriptors and public descriptors with colliding names. [#76635][#76635]
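A minimal sketch of resizing the transaction ID cache described above; the `'32MiB'` value is an arbitrary illustration of the byte-size setting:

~~~ sql
-- Shrink the cache below its 64 MB default (illustrative value only).
SET CLUSTER SETTING sql.contention.txn_id_cache.max_size = '32MiB';

-- Confirm the new value.
SHOW CLUSTER SETTING sql.contention.txn_id_cache.max_size;
~~~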
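A minimal sketch of the unique-constraint revalidation built-ins; the table name is hypothetical, and the string-argument shape of the narrower variant is an assumption:

~~~ sql
-- Revalidate every unique constraint in every table in the current database.
SELECT crdb_internal.revalidate_unique_constraints_in_all_tables();

-- Narrower variant for a single (hypothetical) table; string argument assumed.
SELECT crdb_internal.revalidate_unique_constraints_in_table('users');
~~~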
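And a minimal sketch of the `EXPORT PARQUET` `compression` option; the table and `nodelocal` destination are hypothetical:

~~~ sql
-- Export the users table as gzip-compressed Parquet files.
EXPORT INTO PARQUET 'nodelocal://1/users-export' WITH compression = gzip
  FROM TABLE users;
~~~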