Error when trying to set JSONB as a custom type codec #140
You need to declare your custom codec as binary, i.e. pass binary=True to set_type_codec().
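The code from that comment is not preserved here; a minimal sketch of such a binary registration (names assumed, using the pre-0.13 binary keyword):

```python
import json

# Sketch only: with binary=True the encoder must return bytes and the
# decoder receives bytes.  conn is an established asyncpg connection.
await conn.set_type_codec(
    'jsonb',
    encoder=lambda value: json.dumps(value).encode('utf-8'),
    decoder=lambda value: json.loads(value.decode('utf-8')),
    schema='pg_catalog',
    binary=True,
)
```

As the rest of the thread shows, a binary jsonb codec additionally has to deal with the format version byte.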
Hey @elprans, thanks for your help. Unfortunately I tried with binary=True and I still get an error. Not sure if I'm doing something wrong.
Looks like asyncpg prefers the builtin binary-format codec over a custom text-format one. I'll look into fixing this. In the meantime, you can go back to a binary codec and handle the jsonb version byte yourself:
```python
import json

conn = await self.cluster.connect(database='postgres', loop=self.loop)
try:
    def _encoder(value):
        # The jsonb binary format is a version byte (currently 1)
        # followed by the JSON text in UTF-8.
        return b'\x01' + json.dumps(value).encode('utf-8')

    def _decoder(value):
        # Strip the version byte before parsing.
        return json.loads(value[1:].decode('utf-8'))

    await conn.set_type_codec(
        'jsonb', encoder=_encoder, decoder=_decoder,
        schema='pg_catalog', binary=True
    )

    data = {'foo': 'bar', 'spam': 1}
    res = await conn.fetchval('SELECT $1::jsonb', data)
    self.assertEqual(data, res)
finally:
    await conn.close()
```
Is this last version backwards-compatible with data already existing in the database? I'm not sure whether this magic byte is consumed somewhere in the asyncpg client or actually stored in the database.
The magic byte is part of the format for the JSONB type in the database. It is physically stored in the database and is consumed by asyncpg when reading from and writing to jsonb columns.
b'\x01' is the version of the binary protocol[1]; it is saved nowhere, just checked at the very beginning of the parse process.
@vitaly-burovoy That's correct, but the OP's motivation is to enable structured encoding/decoding.
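To make the format concrete: the jsonb binary representation is a single version byte (currently 1) followed by the JSON text in UTF-8. A more defensive decoder than the one above (a sketch, not from the thread) would verify the byte instead of blindly stripping it:

```python
import json

def _decoder(data: bytes):
    # data[0] is the jsonb format version; only version 1 exists today.
    if data[0] != 1:
        raise ValueError(f'unexpected jsonb format version: {data[0]}')
    return json.loads(data[1:].decode('utf-8'))
```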
For anyone else trying this with PostgreSQL 10 and asyncpg 0.13.0, the workaround code suggested above in this issue must be updated from binary=True to format='binary'. Looks like the documentation is behind, but I found a reference to the "format" keyword in a commit diff, so I tried that and it worked.
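In other words, assuming the same _encoder/_decoder pair as in the workaround above, the registration on asyncpg 0.13+ becomes:

```python
await conn.set_type_codec(
    'jsonb',
    encoder=_encoder,
    decoder=_decoder,
    schema='pg_catalog',
    format='binary',  # replaces binary=True from older releases
)
```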
Unless you really need to do binary, stick to the default text format.
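With the default text format there is no version byte to manage, so plain json.dumps/json.loads suffice; a minimal sketch:

```python
import json

await conn.set_type_codec(
    'jsonb',
    encoder=json.dumps,
    decoder=json.loads,
    schema='pg_catalog',
    # format defaults to 'text'
)
```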
As someone who is new to PostgreSQL, trying different drivers, and testing PostgreSQL against my current MongoDB back end: do you have any recommended best practices for using asyncpg in a mixed RDBMS and document-store setup? The application stores REST API response data in JSONB columns, with some key fields pulled out into simple-data-type columns for more direct SQL activities. Several tables with smaller JSON objects will have 50+ million rows.

I've not been able to find many real-world examples using asyncpg, and I'd like to give the driver and database a good test, but without practical examples I worry that the testing won't be fair. If you have any recommendations, or can point me to projects using asyncpg in a recommended fashion, it would be very much appreciated. Thank you for your consideration, and for your efforts supporting the Python community.
I recommend reading the PostgreSQL documentation: https://www.postgresql.org/docs/current/static/datatype-json.html, especially the "Designing JSON documents effectively" and "jsonb Indexing" sections. There isn't anything special you need to do with asyncpg to work with the JSON types, beyond optionally setting a codec that encodes/decodes JSON data automatically.
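A sketch of the layout described above (table and column names are hypothetical): key fields in ordinary typed columns for direct SQL, the full document in a jsonb column with a GIN index for containment queries:

```python
await conn.execute('''
    CREATE TABLE IF NOT EXISTS api_responses (
        id          bigserial PRIMARY KEY,
        resource    text NOT NULL,         -- key field pulled out of the document
        fetched_at  timestamptz NOT NULL DEFAULT now(),
        payload     jsonb NOT NULL         -- the full REST API response
    )
''')
await conn.execute('''
    CREATE INDEX IF NOT EXISTS api_responses_payload_idx
        ON api_responses USING gin (payload)
''')

# The GIN index accelerates containment queries such as:
rows = await conn.fetch(
    'SELECT id, resource FROM api_responses WHERE payload @> $1::jsonb',
    '{"status": "active"}',
)
```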
It seems like using this setting but the
I followed the steps to set up a custom type codec for JSON as described here: https://magicstack.github.io/asyncpg/current/usage.html#custom-type-conversions

It works with tables created with the JSON type, but fails for the JSONB type. I created a new test locally (similar to test_custom_codec_override, but using the jsonb type instead of the json type); a reconstruction follows below.
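The original test code is not preserved in this thread; a plausible reconstruction (cluster and test-case plumbing assumed from the workaround example above), registering a text-format codec as the linked documentation describes:

```python
import json

conn = await self.cluster.connect(database='postgres', loop=self.loop)
try:
    await conn.set_type_codec(
        'jsonb', encoder=json.dumps, decoder=json.loads,
        schema='pg_catalog'
    )
    data = {'foo': 'bar', 'spam': 1}
    res = await conn.fetchval('SELECT $1::jsonb', data)
    self.assertEqual(data, res)
finally:
    await conn.close()
```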
This is the error returned: