JSON values have a storage limit of 256KB and throw the error 'Value for field <field name> is too long' when a value exceeds that limit. #2705
Comments
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 10 days if no further activity occurs. Thank you for your contributions. |
Has anyone found a workaround for this? This is a showstopper for my app, which stores JSON objects of about 350KB. It seems this isn't a Postgres limit but something hard-coded by Prisma. Is the only way to fork Prisma and hack the line linked to above? Or is it more complicated, and Prisma also limits the DB field size when creating the DB schema? |
I would also be interested in having the limits for String and JSON fields made configurable. Our app has to save large documents and we're having to use ugly workarounds at the moment. |
Any update regarding this? We're planning on launching later this month and are blocked by this. Thanks! |
Can you just gzip the value into a string? JSON often compresses pretty well. You could also make your field an array of strings and chunk your data.
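A minimal sketch of that idea using Node's built-in zlib (the payload below is a made-up example, not anything from this thread):

const zlib = require('zlib');

// any large JSON value that would normally go into a Json field
const payload = { rows: new Array(10000).fill({ id: 1, text: 'hello world' }) };

// write path: JSON -> gzip -> base64, stored in a plain String field
const packed = zlib.gzipSync(JSON.stringify(payload)).toString('base64');

// read path: base64 -> gunzip -> JSON
const restored = JSON.parse(zlib.gunzipSync(Buffer.from(packed, 'base64')).toString());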
|
I used gzipping as the workaround I mentioned earlier in this thread. Naturally, that only delays the inevitable: our system deals with documents that can be up to around 1-2MB in size, and due to their complexity (programmable documents) and the real-time nature of our platform, I can't easily split them into smaller pieces. I'm wondering if the current limits on the String and JSON fields are arbitrary or due to a specific constraint? |
My assumption is that it was a lowest common denominator issue to harmonize the various DBs.
|
That's definitely possible, yes. In that case, configuration flags (as mentioned here in the docs, but which don't seem to exist) might be a viable solution. |
Any suggestion on Json type field length? |
This is a high-priority problem for us. We have a production app that depends on data of roughly 300-500KB per record, plus a small amount of data in the 500KB-5MB range, and now our server is throwing errors whenever we work with that table. It's not specified anywhere why, or by whom, it was decided that 'Json values are currently limited to 256KB in size.' Even MongoDB has a 16MB limit per document, and MySQL/PostgreSQL allow up to 4GB per value. Why limit the data to 256KB in this age of data? :( I don't understand. The docs link to an issue that seems to have been closed/stalled for many months/years. The following issues are related, yet no solution/workaround has been provided:
All these issues are about the 256KB limit on String as well, so simply switching to String is not an option either.
On that note, here is my current workaround until I find a better one 🗡️ With it I could handle around 50MB of data, depending on my computer/server's CPU/RAM. It seems the Akka stack used inside the Scala server has this limit by default. The trick is to create a [String] array field alongside the Json type. When the client requests the record, we return the single JSON value; when storing it, we split the data into many little chunks and save those instead. I'm pretty sure there will be some server limitations, because updating/creating such a big array is going to take time. When the record is requested, we convert the chunks back to JSON before returning it.

Concept Code
Here is the rough code, use with caution:

// index.js
const { prisma } = require("./generated/prisma-client");
const { GraphQLServer } = require("graphql-yoga");
const fastChunkString = require("fast-chunk-string");
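// dataToArray splits one large JSON value into many small string chunks (each well under
// the per-field size limit); dataToJson joins the stored chunks and parses them back into JSON.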
const dataToArray = (data) => {
  const newData = typeof data === 'string' ? data : JSON.stringify(data);
  // fast-chunk-string's `size` is a character count: 2048-character chunks
  // stay comfortably below the 256KB per-value limit
  return fastChunkString(newData, { size: 2048 });
};
const dataToJson = (result) => {
  result.single = JSON.parse(result.data.join(''));
  return result;
};
const resolvers = {
  Query: {
    async output(root, args, context) {
      const result = await context.prisma.output({ id: args.id });
      return dataToJson(result);
    }
  },
  Mutation: {
    async createOutput(root, args, context) {
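      // `set` is Prisma 1's scalar-list syntax for writing the whole list of chunks at once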
      const result = await context.prisma.createOutput({
        data: { set: dataToArray(args.single) }
      });
      return dataToJson(result);
    },
    async updateOutput(root, args, context) {
      const result = await context.prisma.updateOutput({
        where: { id: args.id },
        data: { ...args.data, data: { set: dataToArray(args.single) } }
      });
      return dataToJson(result);
    }
  }
};
const server = new GraphQLServer({
  typeDefs: "./schema.graphql",
  resolvers,
  context: {
    prisma
  }
});
server.start(
  { bodyParserOptions: { limit: "100mb", type: "application/json" } },
  () => console.log("Server is running on http://localhost:4000")
);

# schema.graphql
# import * from './generated/prisma.graphql'
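# clients only ever see `single` (Json); the chunked `data` list stays a server-side storage detail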
type Query {
  output(id: ID!): Output
}
type Mutation {
  createOutput(name: String!, single: Json): Output
  updateOutput(id: ID!, single: Json): Output
}
type Output {
  id: ID!
  single: Json
}

# datamodel.prisma
type Output {
  id: ID! @id
  data: [String] @scalarList(strategy: RELATION)
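  # the JSON payload is persisted here as many small String chunks,
  # each staying under the 256KB per-value limit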
  single: Json
}

// testing code with fetch
var longData = '#'.repeat(50000000);
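// ~50MB of test payload, roughly 200 times the 256KB per-value limit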
fetch("http://localhost:4000/", {
credentials: "include",
headers: {
accept: "*/*",
"accept-language": "en-US,en;q=0.9,bn;q=0.8",
"content-type": "application/json",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin"
},
referrer: "http://localhost:4000/",
referrerPolicy: "no-referrer-when-downgrade",
body:
`{"operationName":"createOutput","variables":{},"query":"mutation createOutput {\\n createOutput(name: \\"test\\", single: \\"{\\\\\\"Hello\\\\\\":\\\\\\"${longData}\\\\\\"}\\"){\\n id\\n single\\n }\\n}"}`,
method: "POST",
mode: "cors"
})
  .then(data => data.json())
  .then(data => {
    console.log({
      length: longData.length,
      same: data.data.createOutput.single.Hello === longData
    })
  })

Result:
It was using a freaking amount of RAM for a 50MB request though. The database held only around 200 documents of 256KB data, and just opening the admin panel used around 5-10GB of RAM lol. Are you saying Prisma cannot handle 200 documents with 256KB of data each? On my machine,
This is such a weird workaround, but it worked, and that tells me this issue has been on hold for 2 years for no reason. 😅 Any suggestions are welcome |
They put a 256KB limit, so I created a 50MB one. The only other issue I can see with that hardcoded value in the validation file is garbage collection. But it still feels really arbitrary/randomly chosen; maybe they thought "if we don't put a limit here, people might abuse our service, or the server might not be able to handle it", etc. |
I just ran into this issue myself trying to store ~300KB JSON objects and thought I'd share my quick workaround here in case folks find it helpful. As recommended above, I ended up changing the data type of the attribute I was storing from Json to String and compressing the value before saving it:

// compressionUtils.js
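// deflateData compresses a value with Brotli and base64-encodes it so the result fits
// in a String field; inflateData reverses the process when reading the value back.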
import * as zlib from 'zlib'
const base64 = 'base64'
export const deflateData = data =>
  zlib.brotliCompressSync(data).toString(base64)
export const inflateData = data =>
  zlib.brotliDecompressSync(Buffer.from(data, base64)).toString()

We're using Node's built-in zlib here. Obviously you will lose the ability to query the contents of the field once it's stored as a compressed string. |
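A rough sketch of how helpers like these could be wired around the Prisma 1 client (the Document model and its body String field below are hypothetical, purely for illustration):

// hypothetical datamodel: type Document { id: ID! @id  body: String }
import { prisma } from './generated/prisma-client'
import { deflateData, inflateData } from './compressionUtils'

// compress on the way in
export const saveDocument = largeObject =>
  prisma.createDocument({ body: deflateData(JSON.stringify(largeObject)) })

// decompress and parse on the way out
export const loadDocument = async id => {
  const doc = await prisma.document({ id })
  return JSON.parse(inflateData(doc.body))
}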
@kunovsky I did test with zlib before going the splitting route, and zlib was working fine as well, but it would not let me store binary or other non-compressible data within the limit, so I had to forcefully split and join instead. I would suggest others try zlib, just like your example, if the data is compressible. :) |
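To illustrate the compressibility point, a small sketch (exact numbers will vary) comparing repetitive JSON with random data, using only Node's built-in modules:

const zlib = require('zlib')
const crypto = require('crypto')

// highly repetitive JSON compresses dramatically
const repetitive = JSON.stringify({ rows: new Array(20000).fill({ a: 1, b: 'hello world' }) })

// ~800KB of random bytes rendered as hex compresses far less and can stay above 256KB
const random = crypto.randomBytes(400 * 1024).toString('hex')

console.log(repetitive.length, zlib.brotliCompressSync(repetitive).length)
console.log(random.length, zlib.brotliCompressSync(random).length)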
Json type fields have a storage limit of 256KB. There might be applications that require LONGTEXT-length support to store JSON values. It would be great if we could remove this limitation at the application level and let the DB handle the limits (not sure if this is desirable for all scalar types, but in this case a Json value can go beyond 16MB).