issue with custom data types between schemas on same connection #72
Yes. My best guess for the fix would be to have the connection object drop the plan cache on every …
```python
>>> await conn.execute("CREATE TABLE IF NOT EXISTS abc(i int)")
>>> await conn.fetch("SELECT * FROM abc")
>>> await conn.execute("ALTER TABLE abc ALTER COLUMN i TYPE bigint")
>>> await conn.fetch("SELECT * FROM abc")  # raises: cached plan must not change result type
```
Note that DDL can be executed in a stored procedure, or even via another connection (e.g. by a migration script). The easiest fix is to intercept the exception, recreate the prepared statement, and run it again. Unfortunately, this does not work inside transactions, because a single error ruins the whole transaction at once (or back to the last savepoint [1]). In my project at work I simply barred other code from issuing queries inside transactions: a wrapper collects all queries to be run in a transaction before it starts, and when such an exception is raised (or the connection is lost), the wrapper drops that prepared statement (or waits until a new connection is established) and runs those queries in another transaction; a sketch of this approach follows the examples below. My best guess would be: …
P.S.: Of course, there is an edge case which can lead to an infinite loop:

```python
>>> await conn.execute(
...     "CREATE TABLE a(i int);"
...     "PREPARE p AS SELECT * FROM a;"
...     "EXECUTE p;"
...     "ALTER TABLE a ALTER COLUMN i TYPE bigint;"
...     "EXECUTE p;"
... )
```

or another one:

```python
>>> await conn.execute("DROP TABLE IF EXISTS abc")
>>> await conn.execute("CREATE TABLE IF NOT EXISTS abc(i int)")
>>> await conn.fetch("PREPARE p AS SELECT * FROM abc")
>>> await conn.fetch("EXECUTE p;")
>>> await conn.execute("ALTER TABLE abc ALTER COLUMN i TYPE bigint")
>>> await conn.fetch("EXECUTE p;")
```

Nothing changes if you recreate a prepared statement for the last command (a prepared statement for a prepared statement) or do not cache it at all: the reason is in the user's own `PREPARE`/`EXECUTE` statements, not in the library's cache.

[1] https://www.postgresql.org/docs/current/static/sql-savepoint.html
Actually, the documentation states the following: […]

Which makes this error a bit odd, as prepared-statement submission should survive both the schema change and the DDL. I'll need to look into this more.
So, on a bit of further investigation, it appears that this is a limitation of Postgres' current plan-cache implementation [1]. The prepared plan is replanned, but the query must return a tuple of the same type, which is what the error message is saying. I'm now inclined to leave the current behaviour as is.
It survives as long as the result types of all "columns" (whose type information has already been sent to the client side) stay the same:

```python
>>> await conn.execute("CREATE TABLE IF NOT EXISTS abc(i int)")
>>> await conn.fetch("SELECT * FROM abc")
>>> await conn.execute("ALTER TABLE abc ALTER COLUMN i TYPE bigint")
>>> await conn.execute("ALTER TABLE abc ALTER COLUMN i TYPE int")  # change it back
>>> await conn.fetch("SELECT * FROM abc")
>>> print("OK")  # check it is OK
```
Whether users will have to explicitly wrap …
Probably a "prepare threshold" setting on the connection, like JDBC has. Setting it to 0 would disable automatic plan caching altogether. Arguably, this issue is an edge case.
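A sketch of what such a knob looks like in asyncpg, where `connect()` accepts a `statement_cache_size` parameter (0 disables the cache):

```python
import asyncpg


async def connect_uncached(dsn: str) -> asyncpg.Connection:
    # statement_cache_size=0 turns off automatic statement caching,
    # the equivalent of the JDBC prepare threshold of 0 mentioned above.
    return await asyncpg.connect(dsn, statement_cache_size=0)
```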
-1. APIs like this are hard to use and usually lead to error-prone code. Two solutions for the OP's problem: […]
Debatable. According to POLA (the principle of least astonishment), …
I agree that this is true for changing the current schema name between queries.
Alternatively, we can reset the cache and auto-retry if we catch this exact exception.
I wrote above about the possible approaches and the corner and edge cases to be covered.
As the library becomes more popular, more users will run into this issue. The primary reason the current report appeared is not Postgres itself; it is the implicit statement caching. Happily, I ran into this issue a long time ago (and not in production), and in my current project I implemented a workaround in a wrapper (which does many other things besides the one discussed). But since I have that experience (and a wrapper for it), I am not the kind of person who would author a report like this one, whereas many people do not have such experience and/or use libraries which handle the issue for them (@elprans mentioned JDBC does something). I think sooner or later it will be fixed in asyncpg.
Yes, I meant the same thing. See the JDBC docs on `prepareThreshold`; setting it to 0 disables plan caching there. We probably need to put a note in the docs mentioning that for connections that fiddle with DDL and/or `search_path`, the statement cache should be disabled.
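Since `create_pool()` forwards connection keyword arguments, the same knob applies per pool; a sketch for a DDL-heavy pool (the function name is illustrative):

```python
import asyncpg


async def make_ddl_pool(dsn: str) -> asyncpg.pool.Pool:
    # Connections that run DDL and/or change search_path are exactly the
    # ones the docs note applies to; disabling the cache avoids stale plans.
    return await asyncpg.create_pool(dsn, statement_cache_size=0)
```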
Right, we understand the actual issue here. Speaking about the possible solutions you proposed:

The behaviour inside and outside transactions should be the same. [1]

We think that you're right and the problem needs to be addressed; more on that below.

This would be super inefficient, so we cannot do this, and [1] can't be implemented either.

This is the solution for the OP's problem. The solution we arrived at is:
PostgreSQL will raise an exception when it detects that the result type of the query has changed from when the statement was prepared. This may happen, for example, after an ALTER TABLE or SET search_path. When this happens, and there is no transaction running, we can simply re-prepare the statement and try again. If the transaction _is_ running, this error will put it into an error state, and we have no choice but to raise an exception. The original error is somewhat cryptic, so we raise a custom InvalidCachedStatementError with the original server exception as context. In either case we clear the statement cache for this connection and all other connections of the pool this connection belongs to (if any). See #72 and #76 for discussion. Fixes: #72.
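From application code, the one case that still surfaces is a transaction in progress; a sketch of a retry loop around a whole transaction (the helper name is made up, and it assumes the transaction body is safe to re-run):

```python
import asyncpg


async def run_tx_with_retry(conn: asyncpg.Connection, tx_body, attempts: int = 2):
    """Re-run a transaction whose cached statement was invalidated by
    concurrent DDL; the error aborts the transaction, so the only safe
    recovery is to start it over."""
    for attempt in range(attempts):
        try:
            async with conn.transaction():
                return await tx_body(conn)
        except asyncpg.exceptions.InvalidCachedStatementError:
            if attempt == attempts - 1:
                raise
            # asyncpg has already cleared the stale cache entries, so the
            # retry will re-prepare against the current schema.
```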
Installed from (if locally, which version of Cython was used)?: PyPI pip package
uvloop?: Yes
I'm receiving the exception

```
asyncpg.exceptions.FeatureNotSupportedError: cached plan must not change result type
```

when I try to use a custom enum type across two schemas on the same connection. This happens even when I'm using independent transactions, and it doesn't happen when I issue the same SQL in Postgres directly.
I've written a sample python script along with some SQL to reproduce the issue.
https://gist.github.com/defg/8fd9e97efd1ba5195052a0faa1457dcc
Maybe something is cached without regard for the schema?

I'm working around this for the time being by making one connection per schema, but it would be much easier in the future if I could access different schemas on the same connection without errors.
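The gist isn't reproduced here, but the shape of the failure is roughly the following (schema, type, and table names are made up for illustration):

```python
import asyncpg


async def reproduce(dsn: str):
    conn = await asyncpg.connect(dsn)
    # Two schemas, each with an identically named enum type and table.
    for schema in ("s1", "s2"):
        await conn.execute(f"CREATE SCHEMA IF NOT EXISTS {schema}")
        await conn.execute(f"CREATE TYPE {schema}.mood AS ENUM ('happy', 'sad')")
        await conn.execute(f"CREATE TABLE {schema}.t (m {schema}.mood)")

    await conn.execute("SET search_path TO s1")
    await conn.fetch("SELECT m FROM t")  # plan cached with s1.mood as the result type

    await conn.execute("SET search_path TO s2")
    # The cached plan for the identical query text now resolves to s2.mood,
    # so the server raises "cached plan must not change result type".
    await conn.fetch("SELECT m FROM t")
```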