- asyncpg version: 0.18.3
- PostgreSQL version: 9.6.8
- Do you use a PostgreSQL SaaS? If so, which? Can you reproduce the issue with a local PostgreSQL install?: AWS RDS Aurora PostgreSQL, can reproduce it locally
- Python version: 3.6.8
- Platform: Linux
- Do you use pgbouncer?: No
- Did you install asyncpg with pip?: Yes
- If you built asyncpg locally, which version of Cython did you use?:
- Can the issue be reproduced under both asyncio and uvloop?: uvloop
I recently got bitten by what's described at the end of this issue: #103 (review)

I did a schema migration of a table while the service was running, and a few errors like this appeared (once for each process):
```
    results = await connection.fetch(query)
  File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 421, in fetch
    return await self._execute(query, args, 0, timeout)
  File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 1414, in _execute
    query, args, limit, timeout, return_status=return_status)
  File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 1422, in __execute
    return await self._do_execute(query, executor, timeout)
  File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 1444, in _do_execute
    result = await executor(stmt, None)
  File "asyncpg/protocol/protocol.pyx", line 196, in bind_execute
asyncpg.exceptions.InvalidCachedStatementError: cached statement plan is invalid due to a database schema or configuration change
```
The problem is that the query was executed inside a transaction, so asyncpg had no way to recover from that state transparently (which is why the exception was raised instead of being retried internally).
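For now I'm considering handling this at the application level by retrying the whole transaction when the error surfaces. Here is a rough sketch of what I mean (the `fetch_with_retry` helper and the pool setup are just illustrative, not part of asyncpg):

```python
import asyncpg
from asyncpg.exceptions import InvalidCachedStatementError


async def fetch_with_retry(pool, query, *args, retries=1):
    # Hypothetical helper: run the query in its own transaction and retry
    # once if the cached plan was invalidated by a concurrent schema change.
    for attempt in range(retries + 1):
        try:
            async with pool.acquire() as connection:
                async with connection.transaction():
                    return await connection.fetch(query, *args)
        except InvalidCachedStatementError:
            if attempt == retries:
                raise
            # The failed transaction was rolled back by the context manager;
            # asyncpg should have dropped the stale cached statement, so the
            # next attempt re-prepares the query against the new schema.
```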
Is setting `statement_cache_size=0` the only way to avoid this? Do you know whether disabling the cache carries a significant performance penalty?
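For reference, this is how I would disable the cache if that turns out to be the recommended approach (the DSN is a placeholder). As far as I understand, it makes asyncpg re-prepare every query, so stale plans can no longer be an issue, at the cost of an extra round trip per query:

```python
import asyncpg


async def create_pool():
    # Disable the per-connection prepared statement cache entirely.
    return await asyncpg.create_pool(
        dsn='postgresql://user:password@localhost/mydb',  # placeholder DSN
        statement_cache_size=0,
    )
```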