asyncpg.exceptions.InvalidSQLStatementNameError: prepared statement "__asyncpg_stmt_37__" does not exist #121
Setting |
Start by printing |
Also, you're not using threads in any way, are you?
Yes, I do. That was my first attempt to work around the issue, without any luck.
No, I don't. At least not explicitly, but I don't recall whether aiohttp uses them either.
Thanks for the suggestion! Will try this today.
Ideally, we'd like to see the code on which this can be reproduced.
I understand. It would be quite complicated, however, since to make sure the error occurs one has to wait for several hours. As I said, everything works fine from the very start until some random moment. I'll try to set up a simple toy script in the hope of reproducing the problem; that may just take some time. I have added the prints you asked for. I also added one for the statement when query execution fails. Btw, I noticed that sometimes the error becomes different:
We do have the cache disabled now, but we observe both named and unnamed prepared statement misses. Here is the prints log. I just realized it would be nice to have timestamps, but that's for a second iteration if you're interested in that part of the information:
I noticed that for some ids the statement was cleared before it got executed again. That's probably the part of the problem we're investigating, right?
I don't actually see any statements cleared before use in your log. They do get cleared after. Timestamps would be useful, yes.
Hm... true. I misread a few ids. Ok, waiting for the next round. This will take some time.
Also, please indicate in the log the first occurrence of the error.
Ok. But first occurrence of |
Duplicates: #149
I still can't imagine a scenario in which either of those could have fixed something that could cause a bug like this. @kxepal, would it be possible for you to test asyncpg with one of those commits reverted, to see if the problem reoccurs? We really need to understand what happened here.
Actually, I'm confused: I don't understand the origins of this bug. It was present in our production (though we had dirty-patched it), and upgrading to 0.13 has got rid of it completely so far; previously it used to happen almost instantly. But now I have managed to reproduce it with 0.13.
Run it as a simple Python script. It fails like:
But! I can't reproduce it in the 0.13 release with the following script:
It triggers the bug in the 0.11 and 0.12 releases and for commit 57c9ffd, but not for 50edd8c. P.S. FYI, the 0.13 release also fixed an issue with a false positive
Run it with py.test. It fails like:
Unexpectedly, I failed to reproduce it with anything simpler.
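(Neither repro script survives in this excerpt. Purely as an illustration of the general shape being discussed, a tight loop of pooled fetches against the same setup might look like the sketch below; the DSN is a placeholder, and this is not the author's actual script:)

```python
import asyncio
import asyncpg

async def main():
    # Placeholder DSN; in the reported setup this points at pgbouncer.
    pool = await asyncpg.create_pool(
        dsn='postgresql://user:pass@pgbouncer-host:6432/dbname',
        min_size=1,
        max_size=4,
    )
    while True:
        # Fire batches of identical queries; with the statement cache on,
        # each connection reuses its named prepared statement.
        await asyncio.gather(*(pool.fetch('SELECT 1') for _ in range(10)))

asyncio.get_event_loop().run_until_complete(main())
```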
@elprans @1st1
This is definitely not how it should be fixed, but it does help to avoid the bug on all asyncpg versions. Hope it helps.
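(The actual patch isn't preserved in this excerpt. As a rough illustration of the kind of hack discussed here and later in the thread, i.e. retrying when the server no longer recognizes a cached prepared statement, something of this shape could be used; the fetch_with_retry helper is purely illustrative:)

```python
import asyncpg
from asyncpg.exceptions import InvalidSQLStatementNameError

async def fetch_with_retry(pool, query, *args, attempts=2):
    # Retry the fetch when the cached prepared statement has vanished
    # server-side (e.g. pgbouncer routed us to a different backend).
    # Depending on the asyncpg version this may need to be combined with
    # statement_cache_size=0 so the retry does not hit the same stale
    # cache entry.
    for attempt in range(attempts):
        try:
            async with pool.acquire() as conn:
                return await conn.fetch(query, *args)
        except InvalidSQLStatementNameError:
            if attempt == attempts - 1:
                raise
```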
Hm, there's a bug in your repro script.
When doing the right thing ( |
I can't reproduce with your |
@elprans
It took about three to five minutes for me to trigger it on v0.13, and about 5-10 seconds on the others.
I tend to think so as well. I've updated the top post to mention pgbouncer's presence; we do in fact have it. There is a bit of a problem with the whole setup, since I don't control the PostgreSQL service: it is provided as SaaS. Do you know what questions I could ask the service admins to gain useful information for the debugging process? However, there are no issues at all with our setup when using aiopg or psycopg2.
Yes. And it takes only a few moments to catch the issue. On FreeBSD it takes a while.
Definitely not reproduced.
FWIW, I could not reproduce this in a FreeBSD 10.4-STABLE VM with either script (stock python3.6 and postgresql96-server).
Which SaaS are you using? Are you able to provide access to a test database instance? If so, please send the details to elvis@magic.io
Also, if there is indeed pgbouncer involved, you cannot use the statement cache at all, since pgbouncer does not support prepared statements. Disable it by passing statement_cache_size=0.
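(For illustration, a minimal sketch of disabling the cache; the DSN and pool sizes are placeholders, statement_cache_size is the relevant asyncpg connection/pool parameter:)

```python
import asyncpg

async def make_pool():
    # With pgbouncer in transaction/statement pooling mode the statement
    # cache must be off, otherwise a statement prepared on one backend
    # may later be executed against a different one.
    return await asyncpg.create_pool(
        dsn='postgresql://user:pass@pgbouncer-host:6432/dbname',
        min_size=1,
        max_size=4,
        statement_cache_size=0,
    )
```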
It's not a public service, but an internal one. It's administered by a different team and provided to the rest of the company as SaaS. Sorry, I should have been more specific about the SaaS term. From the docs it should be like this one:
Providing access will most likely not be possible, or would be an extremely complicated task.
We followed that advice from issue #76, but it had a limited effect on this issue, so I dropped it. I reran the first script with the cache disabled, and now it failed with a different error:
Thanks! I modified create_pool in the following way:
But it seems it doesn't help, no matter whether it is discard or deallocate. That was a naive attempt in the hope that it might help. I also noticed that sometimes it takes dozens of minutes to catch the failure, while sometimes, rarely, it fails instantly a few times in a row. What could provoke such behaviour? Server-side misconfiguration?
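(The modified create_pool call isn't preserved in this excerpt. A sketch of the discard/deallocate idea described above, assuming asyncpg's setup callback, which runs each time a connection is acquired from the pool:)

```python
import asyncpg

async def _reset(conn):
    # Drop all session state, including server-side prepared statements.
    # 'DEALLOCATE ALL' would drop only the prepared statements.
    await conn.execute('DISCARD ALL')

async def make_pool():
    return await asyncpg.create_pool(
        dsn='postgresql://user:pass@pgbouncer-host:6432/dbname',
        min_size=1,
        max_size=4,
        setup=_reset,  # invoked on every acquire()
        statement_cache_size=0,
    )
```

As the comment above notes, this kind of per-acquire reset did not help in the reported setup.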
Right. I stand firm in my opinion that pgbouncer in "transaction" or "statement" pooling mode cannot be used together with prepared statements. I will add relevant advice to the docs and the FAQ.
The |
I'm not aware of a clean way to handle this, unfortunately. You should continue to roll your hack for now. Also, you probably need to patch |
I would also open a pgbouncer bug. Basically, pgbouncer must make a session "sticky" as soon as it sees a Prepare message (or anything else that is session-specific).
We actually should do this regardless of this bug. I'll make a PR.
Incidentally: pgbouncer/pgbouncer#242
We changed |
Great! Closing.
uvloop?: yes
Hi!
We noticed a problem that the issue title describes pretty well. How did we run into it?
We have a small aiohttp service which runs a static query against a PostgreSQL database on every handled request, like `SELECT foo, bar FROM table ORDER BY baz`. No parameters, nothing at all. That's the only query we execute against the database. We're using a connection pool sized from 1 to 4 connections; the rest of the parameters are defaults (except the connection ones, of course).

Everything works fine until eventually we start getting the InvalidSQLStatementNameError exception on the `connection.fetch` call. It doesn't happen instantly; it occurs after a few hours of service uptime. Sometimes every `.fetch` call ends with that exception, sometimes only some of them. The number in the `__asyncpg_stmt_` token is always different, but it stays the same from the moment the error occurs until restart.

We tried to disable the statement cache entirely by setting it to zero. That didn't help. We thought the 0.10 release would fix the problem; unexpectedly, it didn't. We played around with the cache size, connection pool size, and connection lifetime, but everything ended with the same issue. The timing may differ by dozens of minutes, but it all ends the same way.
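(For reference, a minimal sketch of the setup described above; the route, handler, DSN, and table/column names are illustrative placeholders, not the actual service code:)

```python
import asyncio

import asyncpg
from aiohttp import web

async def handle(request):
    pool = request.app['pool']
    # The single static query from the report ("table" quoted so the
    # placeholder name is valid SQL).
    rows = await pool.fetch('SELECT foo, bar FROM "table" ORDER BY baz')
    return web.json_response([dict(r) for r in rows])

async def init_app():
    app = web.Application()
    # Connection pool sized from 1 to 4 connections, as in the report.
    app['pool'] = await asyncpg.create_pool(
        dsn='postgresql://user:pass@db-host/dbname',
        min_size=1,
        max_size=4,
    )
    app.router.add_get('/', handle)
    return app

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    app = loop.run_until_complete(init_app())
    web.run_app(app)
```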
It's quite hard for me to provide an example that reproduces the problem, because everything just works for quite a long time, except for that particular service, which isn't special in any way: it doesn't use any hacks or globals and is pretty trivial. I just have no idea what to try next.
The last bits of the traceback are here:
Any ideas on how to debug this issue, or work around it? I would be grateful for any help.