MySQL Vitess Error 2027 (HY000) when inserting large blob data #644
-
I'm unable to upsert large blob data. The same code, SQL, and data run normally with smaller payloads, but any large blob fails with Error 2027 (HY000). For example:

```ts
// nodejs 20
// sequelize 6.37.3
// mysql2 3.10.0
const payload: any = {
  documentName: "debug:test-large-blob",
  state: (() => {
    // Create a 40MB buffer
    const bufferSize = 40 * 1024 * 1024; // 40MB in bytes
    const largeBuffer = Buffer.alloc(bufferSize);
    // Fill the buffer with some data (optional)
    largeBuffer.fill('A');
    return largeBuffer;
  })(),
};

await Document.upsert({
  name: payload.documentName,
  data: payload.state,
});
```

```sql
CREATE TABLE `Document` (
  `name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
  `data` longblob NOT NULL,
  `createdAt` datetime NOT NULL,
  `updatedAt` datetime NOT NULL,
  PRIMARY KEY (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
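For reference, a minimal Sequelize model matching the table above would look roughly like this (a sketch, not my exact production definition; the connection setup is a placeholder):

```ts
import { Sequelize, DataTypes, Model } from "sequelize";

// Placeholder connection; the real project reads its own configuration.
const sequelize = new Sequelize(process.env.DATABASE_URL!, { dialect: "mysql" });

class Document extends Model {}
Document.init(
  {
    name: { type: DataTypes.STRING, primaryKey: true },
    data: { type: DataTypes.BLOB("long"), allowNull: false }, // maps to LONGBLOB
  },
  // createdAt/updatedAt timestamps are on by default, matching the table.
  { sequelize, modelName: "Document", tableName: "Document" }
);
```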
-
Were you able to sort this issue out?

One thing I would be curious about is whether a simpler query/response works fine against the PlanetScale database as one test, to ensure your TLS configuration for the connection is working OK and that the issue isn't related to the connection itself having trouble being established.

If that is working as expected, then I would be curious whether reducing the size of your data blob to 1 MB makes the insert succeed (sketched below). If it does, I would be curious how much you can then increase it before the original error you shared appears again.

You may also reach out via https://support.planetscale.com/ to receive further assistance if needed.
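A rough sketch of that size-ramping test, reusing the `Document` model from the original post (the step sizes and record names are just illustrative, not an official diagnostic):

```ts
// Hypothetical helper: upsert progressively larger blobs until the error appears,
// to find the approximate size threshold. Assumes the Sequelize `Document` model
// from the original post is already initialized and connected.
async function findBlobSizeThreshold(): Promise<void> {
  const sizesMb = [1, 5, 10, 20, 30, 40];
  for (const sizeMb of sizesMb) {
    const data = Buffer.alloc(sizeMb * 1024 * 1024, "A");
    try {
      await Document.upsert({ name: `debug:threshold-${sizeMb}mb`, data });
      console.log(`${sizeMb} MB: OK`);
    } catch (err) {
      console.error(`${sizeMb} MB: failed`, err);
      break; // the first failing size is the approximate limit
    }
  }
}
```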
-
@orware thanks for replying. I don't think there is anything wrong with the MySQL connection itself. We use a PlanetScale database in our production environment and execute many different query statements; there were no problems until some individual users' data swelled to this size. I have written a test script, and I found some new errors: https://github.com/shincurry/debug-mysql-vitess-large-blob-issue

The script tries two cases (a simplified version is sketched below):

- Try to insert a record containing 40MB of blob data
- Try to insert a record containing 30MB of blob data
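A condensed version of what the script does (a sketch, not the exact repo code; it goes through mysql2 directly to rule out the ORM layer, and the connection string and record names are placeholders):

```ts
import mysql from "mysql2/promise";

// Placeholder connection string; TLS options are carried in the URL.
const conn = await mysql.createConnection(process.env.DATABASE_URL!);

for (const sizeMb of [30, 40]) {
  const data = Buffer.alloc(sizeMb * 1024 * 1024, "A");
  try {
    await conn.execute(
      "REPLACE INTO `Document` (`name`, `data`, `createdAt`, `updatedAt`) VALUES (?, ?, NOW(), NOW())",
      [`debug:test-${sizeMb}mb`, data]
    );
    console.log(`${sizeMb} MB: inserted`);
  } catch (err) {
    console.error(`${sizeMb} MB: failed`, err);
  }
}

await conn.end();
```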
-
We decided to use LONGBLOB storage on our server because PlanetScale's documentation says a LONGBLOB column supports up to 4GB: https://planetscale.com/learn/courses/mysql-for-developers/schema/long-strings#blob-columns. The data we store in MySQL BLOB columns is not immutable files like images, audio, or video, but user business data compressed into binary (similar to JSON strings compressed into binary), as sketched below.

So this is not a bug, nor am I using it incorrectly; it is simply that PlanetScale does not support BLOB column values over roughly 30MB? It would not be difficult to migrate to something like object storage, I just hope to get a clear answer. Maybe PlanetScale should indicate the actual supported data size in the documentation.

Anyway, thank you very much for your reply.
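For context, the shape of that data is roughly the following (an illustrative sketch using Node's built-in zlib; the document structure and key names are made up, not our real schema):

```ts
import { gzipSync, gunzipSync } from "zlib";

// Illustrative only: the actual business document structure is different.
const businessDoc = { userId: "u_123", edits: [], settings: {} };

// Compress the JSON into a Buffer before writing it to the LONGBLOB column...
const compressed: Buffer = gzipSync(JSON.stringify(businessDoc));
await Document.upsert({ name: "user:u_123", data: compressed });

// ...and decompress it after reading it back.
const row = await Document.findByPk("user:u_123");
const restored = JSON.parse(
  gunzipSync(row!.get("data") as Buffer).toString("utf8")
);
```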
@shincurry,
The link you shared from our course primarily covers regular MySQL's capabilities, without our extra Vitess layer included, which does add some restrictions/limitations.

While MySQL itself can support large LONGBLOB values, within the PlanetScale environment you will see error messages such as the ones you encountered, because the Vitess layer has a maximum message size set for the communication between MySQL and Vitess. That limit is also important for the overall scalability of a database as it grows in the future.

The recommendation to migrate these particular column values into object storage would be the best one I can share (a sketch of that approach follows below), and with that data no longer needing to be pull…
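A minimal sketch of that pattern, assuming an S3-compatible bucket via @aws-sdk/client-s3 (the bucket name, key scheme, and helper function are hypothetical, not a PlanetScale-provided API):

```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // hypothetical region/bucket

// Upload the large blob to object storage and keep only a small reference in MySQL.
async function saveDocumentState(name: string, state: Buffer): Promise<void> {
  const key = `document-state/${encodeURIComponent(name)}`; // hypothetical key scheme
  await s3.send(
    new PutObjectCommand({ Bucket: "my-app-document-state", Key: key, Body: state })
  );
  // The `data` column now stores the object key instead of the multi-MB payload.
  await Document.upsert({ name, data: Buffer.from(key, "utf8") });
}
```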