sql: explicit lock syntax (SELECT FOR {SHARE,UPDATE} {skip locked,nowait}) #6583
Comments
I think it is a better idea to only accept the syntax without supporting the functionality. While this would need a lot of highlighting in the docs, many current RDBMSs support this explicit locking mechanism (except MSSQL), so accepting it would relieve a lot of pain in modifying apps. In an open-source library like Quartz, accepting this without actually supporting it means possible errors at commit time (due to OCC), which it doesn't handle.
Note that we shouldn't support the syntax without providing the semantics that it implies. Fortunately, the locking modifiers are mostly semantically relevant at lower isolation levels; at SERIALIZABLE they are largely redundant, even if applications may still depend on the blocking behavior. This looks like the code in the Quartz scheduler that uses SELECT FOR UPDATE: it's using a dedicated lock table to serialize its scheduler instances.
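For context, the Quartz row-lock semaphore acquires its lock with a statement along these lines (table and column names taken from a default Quartz schema, shown here only as an illustration):

```sql
-- Take Quartz's scheduler-wide lock row; other scheduler instances block
-- on this row lock until the holding transaction commits or rolls back.
SELECT * FROM QRTZ_LOCKS
WHERE SCHED_NAME = 'MyScheduler' AND LOCK_NAME = 'TRIGGER_ACCESS'
FOR UPDATE;
```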
On a related note, I got a question about this. https://blogs.oracle.com/rammenon/entry/select_for_update_nowait_skip has some nice diagrams of the NOWAIT and SKIP LOCKED behavior.
Any ETA on supporting this SQL syntax? We have a project using OpenJPA, and it generates this SQL with the PostgreSQL driver.
TPC-C also uses SELECT FOR UPDATE.
@jordanlewis I'm guessing that is an optimization, not a requirement, given our default serializable isolation.
You're right.
SELECT FOR UPDATE also appears to be used by Liquibase, and consequently this also prevents us from using CockroachDB with Keycloak. Is there any ETA on supporting this? I'd be willing to try to contribute if the consensus is that the syntax should be supported without the intended semantics, as was suggested in the issue description.
I don't think we want to support the syntax without implementing the semantics behind it.
That makes sense @bdarnell. Do you guys have plans to work on this anytime soon? I wouldn't mind trying to contribute, although I'd probably need a few pointers to get started because actually implementing the semantics sounds rather involved.
We don't currently have any plans to work on this, although we should probably get it on the roadmap. (I'm marking it as 1.3 for now because the 1.2 release cycle is already pretty full, although this would be a good candidate for 1.2 if we find some slack in the schedule. cc @nstewart) Implementing this would touch a lot of different parts of the code. No part is individually too tricky, but there are a lot of them.
Adding new API calls is a bit tedious. Could we add a flag to the existing request instead?
I thought about that, but this seems like too big of a distinction to make based on a flag in the request instead of a whole new command. Imagine debugging command-queue problems when a request's locking behavior depends on a flag.
PostgreSQL (https://www.postgresql.org/docs/current/static/sql-select.html) supports four different levels for the locking clause: FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE, and FOR KEY SHARE.
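As a rough illustration (hypothetical table name), the four clauses attach to an ordinary SELECT like this:

```sql
-- Strongest: blocks concurrent UPDATE/DELETE and other FOR UPDATE readers.
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
-- Exclusive lock that still allows FOR KEY SHARE readers.
SELECT * FROM accounts WHERE id = 1 FOR NO KEY UPDATE;
-- Shared lock: other readers may also lock, writers block.
SELECT * FROM accounts WHERE id = 1 FOR SHARE;
-- Weakest: only blocks operations that delete the row or change its key.
SELECT * FROM accounts WHERE id = 1 FOR KEY SHARE;
```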
Do we want to support all four, or only some of them?
I agree. Should I at least support the syntax for all four?
At least for serializable transactions, the four strengths can probably all be treated the same way.
Sounds good.
In addition to the 4 different "locking strengths", Postgres also supports locking on specific tables as well as different options for waiting on locks. The full syntax is:
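Reconstructing the relevant part of the grammar from the PostgreSQL SELECT documentation (the example query and table names below are made up):

```sql
-- Locking clause grammar:
--   FOR lock_strength [ OF table_name [, ...] ] [ NOWAIT | SKIP LOCKED ]
-- where lock_strength is one of:
--   UPDATE | NO KEY UPDATE | SHARE | KEY SHARE

-- Example combining the pieces:
SELECT o.id, c.balance
FROM orders o JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'pending'
FOR UPDATE OF o SKIP LOCKED;
```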
Do we want to support multiple tables with different options? From the above comments it sounds like we probably want to support all of these eventually. Another question is how precisely the locked rows should match the query results. Postgres generally locks only the rows returned by the query, but there are a few edge cases where extra rows may be locked (e.g., with LIMIT).
I would start with the basic all-or-nothing version. We may want to do the per-table version eventually (if there is demand), but my guess is that the feature is used rarely enough that it's not worth the complexity at this time.
It's fine with me if the exact set of rows locked differs from Postgres in some edge cases. This feature is less crucial for us because we have stronger transaction isolation anyway; I think the only times it will really matter will be the fairly simple cases.
All sounds good to me. But per Spencer's request, I've also created an RFC to summarize the discussions in this issue and discuss an alternative approach involving locking of ranges: PR #19577.
Ah, well, not strictly, no. It is just that in the high-contention case we now see a retryable transaction error instead. It is likely that this only matters if the application has conditional logic along the lines of catching OptimisticLockException and continuing processing, maybe skipping some work it would otherwise do if OptimisticLockException wasn't thrown. The conditional logic based on catching OptimisticLockException that used to work all the time at READ_COMMITTED will now occasionally fail at SERIALIZABLE under high contention.
Well, maybe I am misunderstanding your comment there, but at READ_COMMITTED I'd say we need and use FOR UPDATE (pessimistic locking) for other reasons. For example, blocking for exclusive ownership when processing database migrations, where one instance obtains the lock (and processes the migrations) and the other instances must wait/block (until the migrations are complete). For transactions that have conditional logic at SERIALIZABLE, we could change to use FOR UPDATE (catch and handle the failure to obtain the lock and continue processing).
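To make that migration-lock pattern concrete, it looks roughly like this (migration_lock is a hypothetical table; the real tools use their own lock tables):

```sql
-- Each instance tries to take the single lock row; only one succeeds and
-- the others block on the row lock until the holder commits.
BEGIN;
SELECT locked_by FROM migration_lock WHERE id = 1 FOR UPDATE;
-- ... run the schema migrations while holding the lock ...
COMMIT;  -- releases the row lock, so a waiting instance can proceed
```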
Ah, I think this is what we've been missing. You're talking about a sequence like this, right?
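Presumably the sequence in question is the familiar version-column pattern, something like this (hypothetical table and column names; :expected_version is a bind-parameter placeholder):

```sql
BEGIN;
-- Read the current version of the row.
SELECT version FROM jobs WHERE id = 42;
-- Conditional update: affects 0 rows if another transaction bumped the
-- version in the meantime.
UPDATE jobs SET state = 'running', version = version + 1
  WHERE id = 42 AND version = :expected_version;
-- Application logic: if 0 rows were updated, loop back to the SELECT.
COMMIT;
```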
So you're retrying within a single transaction and relying on the fact that reads are not repeatable. This will only work in READ COMMITTED (and isn't even guaranteed to work there - the standard allows unrepeatable reads but doesn't require them) and will fail in either REPEATABLE READ or SERIALIZABLE. Instead of just looping back to the select statement when you get rowCount==0, you need to roll back the transaction and retry everything in a new transaction. Or you can avoid transactions completely, and do the select and the update completely independently of each other (the version check makes this safe without any transactions). Either of these approaches will work regardless of the isolation level.
Yes, at READ COMMITTED, FOR UPDATE is required. But CockroachDB does not support READ COMMITTED and has no plans to.
This is the most common use case for SELECT FOR UPDATE that we see, and it doesn't work in CockroachDB. These migration tools will have to be modified to do locking in a different way (such as by actually writing to the database instead of just holding a lock in a transient transaction).
Adding FOR UPDATE to the select in this version-column pattern would avoid the need to restart the transaction, but it also means it's not optimistic any more - you'd be holding a write lock across the whole operation.
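As a sketch of the second option above (doing the select and the update independently, with the version check providing the safety), using the same hypothetical jobs table:

```sql
-- No explicit transaction: each statement is atomic on its own.
SELECT version FROM jobs WHERE id = 42;

-- Later, attempt the conditional update with the version that was read.
-- If 0 rows are affected, someone else won the race; re-read and retry.
UPDATE jobs SET state = 'running', version = version + 1
  WHERE id = 42 AND version = :expected_version;
```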
Not really-ish. The select of the data being updated can happen any time before, in a prior transaction (and normally would), and this isn't about retry per se. It is more like:
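Something along these lines (reconstructed; work A and work B stand in for other statements in the same transaction):

```sql
BEGIN;
-- ... do work A ...
UPDATE jobs SET state = 'done', version = version + 1
  WHERE id = 42 AND version = :expected_version;
-- if 0 rows were affected, treat it as an OptimisticLockException
-- ... do work B ...
COMMIT;
```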
Where work A and work B are other updates, deletes, etc. I should just put up the test in a git repo - I'll look to do that at some point but want to sort out some docker stuff first. Note that if there is no "do work A" then we don't really care if the transaction fails (we don't care whether the exception thrown is recoverable or not). It is only when we have other updates/deletes in the transaction that we care that the exception is recoverable (so the transaction can continue). It maybe helps if I say that if instead we have a single update like:
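Presumably a lone statement like this, outside of any larger transaction:

```sql
-- A standalone optimistic update; 0 rows affected means a concurrent change.
UPDATE jobs SET state = 'done', version = version + 1
  WHERE id = 42 AND version = :expected_version;
```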
Then for me it is a matter of treating the two failure modes the same way. With Postgres or Oracle at READ_COMMITTED we always get the OptimisticLockException. With CockroachDB we will sometimes now get the TransactionRetryWithProtoRefreshError ... but this doesn't matter if the application treats both exceptions in the same way - "Sorry, concurrent update, please refresh your data and reapply your changes". So for myself, I'm looking to map TransactionRetryWithProtoRefreshError to be a special type of OptimisticLockException.
Yes, for the "loop and retry" scenario.
Yes. It is the "other clients need to wait (are blocking)" part that is going to be the not-so-pretty aspect to deal with here. I should be doing this shortly, so I'll know for sure soon enough.
Yes, agreed. Apologies for consuming a lot of your time. This has been good for me; I'm getting pretty comfortable with some of the things I need to do. As a more random thought: instead of SELECT FOR UPDATE ... Cockroach could provide a completely different command. Cheers, Rob.
In CockroachDB there is no SELECT FOR UPDATE (see cockroachdb/cockroach#6583). But it is not needed; make it a no-op.
40479: roachtest: update hibernate blacklist for 19.1 and 19.2 r=rafiss a=rafiss

The blacklist was missing some expected failures and also incorrectly marked some tests as expected failure. Some of them are a result of Hibernate adding new tests to their repository. Closing #6583 revealed a few new compatibility issues, so many of the failing tests now link to a new issue.

closes #38288

Release note: None

Co-authored-by: Rafi Shamim <rafi@cockroachlabs.com>
41252: roachtest: add test that aggregates orm blacklist failures r=jordanlewis a=jordanlewis

The spreadsheet we discussed is unwieldy - hard to edit and impossible to keep up to date. If we write down blacklists in code, then we can use an approach like this to always have an up-to-date aggregation. So far it seems like there's just a lot of unknowns to categorize still. The output today:

```
=== RUN TestBlacklists
648: unknown (unknown)
493: #5807 (sql: Add support for TEMP tables)
151: #17511 (sql: support stored procedures)
86: #26097 (sql: make TIMETZ more pg-compatible)
56: #10735 (sql: support SQL savepoints)
55: #32552 (multi-dim arrays)
55: #26508 (sql: restricted DDL / DML inside transactions)
52: #32565 (sql: support optional TIME precision)
39: #243 (roadmap: Blob storage)
33: #26725 (sql: support postgres' API to handle blob storage (incl lo_creat, lo_from_bytea))
31: #27793 (sql: support custom/user-defined base scalar (primitive) types)
24: #12123 (sql: Can't drop and replace a table within a transaction)
24: #26443 (sql: support user-defined schemas between database and table)
20: #21286 (sql: Add support for geometric types)
18: #6583 (sql: explicit lock syntax (SELECT FOR {SHARE,UPDATE} {skip locked,nowait}))
17: #22329 (Support XA distributed transactions in CockroachDB)
16: #24062 (sql: 32 bit SERIAL type)
16: #30352 (roadmap:when CockroachDB will support cursor?)
12: #27791 (sql: support RANGE types)
8: #40195 (pgwire: multiple active result sets (portals) not supported)
8: #6130 (sql: add support for key watches with notifications of changes)
5: Expected Failure (unknown)
5: #23468 (sql: support sql arrays of JSONB)
5: #40854 (sql: set application_name from connection string)
4: #35879 (sql: `default_transaction_read_only` should also accept 'on' and 'off')
4: #32610 (sql: can't insert self reference)
4: #40205 (sql: add non-trivial implementations of FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE, FOR NO KEY SHARE)
4: #35897 (sql: unknown function: pg_terminate_backend())
4: #4035 (sql/pgwire: missing support for row count limits in pgwire)
3: #27796 (sql: support user-defined DOMAIN types)
3: #3781 (sql: Add Data Type Formatting Functions)
3: #40476 (sql: support `FOR {UPDATE,SHARE} {SKIP LOCKED,NOWAIT}`)
3: #35882 (sql: support other character sets)
2: #10028 (sql: Support view queries with star expansions)
2: #35807 (sql: INTERVAL output doesn't match PG)
2: #35902 (sql: large object support)
2: #40474 (sql: support `SELECT ... FOR UPDATE OF` syntax)
1: #18846 (sql: Support CIDR column type)
1: #9682 (sql: implement computed indexes)
1: #31632 (sql: FK options (deferrable, etc))
1: #24897 (sql: CREATE OR REPLACE VIEW)
1: pass? (unknown)
1: #36215 (sql: enable setting standard_conforming_strings to off)
1: #32562 (sql: support SET LOCAL and txn-scoped session variable changes)
1: #36116 (sql: psychopg: investigate how `'infinity'::timestamp` is presented)
1: #26732 (sql: support the binary operator: <int> / <float>)
1: #23299 (sql: support coercing string literals to arrays)
1: #36115 (sql: psychopg: investigate if datetimetz is being returned instead of datetime)
1: #26925 (sql: make the CockroachDB integer types more compatible with postgres)
1: #21085 (sql: WITH RECURSIVE (recursive common table expressions))
1: #36179 (sql: implicity convert date to timestamp)
1: #36118 (sql: Cannot parse '24:00' as type time)
1: #31708 (sql: support current_time)
```

Release justification: non-production change

Release note: None

Co-authored-by: Jordan Lewis <jordanthelewis@gmail.com>
…totype SFU

Informs cockroachdb#41720.

This commit creates a new concurrency package that provides a concurrency manager structure that encapsulates the details of concurrency control and contention handling for serializable key-value transactions. Any reader of this commit should start at `concurrency_control.go` and move out from there.

The new package has a few primary objectives:
1. centralize the handling of request synchronization and transaction contention handling in a single location, allowing for the topic to be documented and understood in isolation.
2. rework contention handling to react to intent state transitions directly. This simplifies the transaction queueing story, reduces the frequency of transaction push RPCs, and allows waiters to proceed after intent resolution as soon as possible.
3. create a framework that naturally permits "update" locking, which is required for kv-level SELECT FOR UPDATE support (cockroachdb#6583).
4. provide stronger guarantees around fairness when transactions conflict, in order to reduce tail latencies under contended scenarios.
5. create a structure that can extend to address the long-term goals of a fully centralized lock-table laid out in cockroachdb#41720.

WARNING: this is still a WIP. Notably, the lockTableImpl is mocked out with a working but incomplete implementation. See cockroachdb#43740 for a more complete strawman.

Release note: None
44787: storage/concurrency: define concurrency control interfaces r=nvanbenschoten a=nvanbenschoten

Informs #41720.

The commit creates a new concurrency package that provides a concurrency manager structure that encapsulates the details of concurrency control and contention handling for serializable key-value transactions. Interested readers should start at `concurrency_control.go` and move out from there.

The new package has a few primary objectives:
1. centralize the handling of request synchronization and transaction contention handling in a single location, allowing for the topic to be documented, understood, and tested in isolation.
2. rework contention handling to react to intent state transitions directly. This simplifies the transaction queueing story, reduces the frequency of transaction push RPCs, and allows waiters to proceed immediately after intent resolution.
3. create a framework that naturally permits "update" locking, which is required for kv-level SELECT FOR UPDATE support (#6583).
4. provide stronger guarantees around fairness when transactions conflict, to reduce tail latencies under contended scenarios.
5. create a structure that can extend to address the long-term goals of a fully centralized lock-table laid out in #41720.

This commit pulls over a lot of already reviewed code from #43775. The big differences are that it updates the lockTable interface to match the one introduced in #43740 and it addresses the remaining TODOs to document the rest of the concurrency control mechanisms in CockroachDB. At this point, a reader of this file should get a good idea of how KV transactions interact in CRDB... now we just need to make the system actually work this way.

Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Postgres (and MySQL as well) provide a FOR UPDATE clause which instructs a SELECT operation to acquire row-level locks for the data it is reading. We currently don't support this syntax, and supporting it with the same semantics is more or less impossible anyway with our underlying concurrency model (we have no locks, and while we could put "intents" in place to simulate an intent to write, it wouldn't quite be the same).

We should discuss whether we want to at least accept the syntax so that frameworks which make use of it can proceed (even if they don't get the intended semantics, though it's not clear what the implications of that are).

Apparently the Quartz scheduler makes use of SELECT FOR UPDATE; it is in this context that this came up.

See also http://www.postgresql.org/docs/9.1/static/explicit-locking.html for the explicit locking modes exposed by Postgres.