Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction #1551
Comments
Just wanted to note that I saw this again recently, so there is definitely something going on, but it's hard to reproduce.
I came here to say I'm experiencing the exact same issue from time to time. Will look into it.
I have a live site where this error was occurring. My suspicion was that it would occur when the site was getting hammered with traffic or spiking in some way. Two weeks ago, after my last comment, I made a local copy of it running a similar LAMP stack. I wrote a PHP script that simply creates 1000 pages that use FieldtypeTable, and I ran that script at the same time in 3 separate terminals. That let me reliably and quickly replicate the error. I then tried to replicate it the same way in a clean ProcessWire installation; I wasn't able to dedicate much time to it, and while I can't state it as fact, the error did not seem to occur there.

Today, for reasons not related to this issue, I decided to upgrade my dev server from MySQL to MariaDB on Ubuntu 22.04. To do this, I had to dump all my databases from MySQL (skipping the various system tables), uninstall it, install MariaDB, and then import the dump. Before I could import the mass dump, I had to run this on the dump file because, in a default installation,
I then imported my dump and made sure everything worked. My motivation for switching to MariaDB was that importing database dumps is a lot faster (I often sync my live sites to dev). I wasn't sure why that was originally, but I later learned it's because binary logs are disabled by default in MariaDB, while they are enabled by default in MySQL (at least with the default settings when installing via apt on Ubuntu). I then turned my attention back to this issue and conducted the same test, and it seems to no longer occur.
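The reproduction script itself isn't included above, but a minimal sketch along those lines might look like this, assuming a `basic-page` template, a `/stress/` parent page, and a Table field named `mytable` with a `col1` column (all placeholder names), and using the row API commonly shown for ProFields Table:

```php
<?php namespace ProcessWire;

// Bootstrap ProcessWire from the site root (adjust the path for your install)
include '/var/www/html/index.php';

// Create 1000 pages, each with a few rows in a FieldtypeTable field.
// 'basic-page', '/stress/', 'mytable' and 'col1' are placeholder names.
for($i = 0; $i < 1000; $i++) {
    $p = new Page();
    $p->template = 'basic-page';
    $p->parent = $wire->pages->get('/stress/');
    $p->name = 'stress-' . uniqid("$i-");
    $p->title = "Stress test $i";
    $p->save();

    // Add a few rows to the Table field, then save just that field
    $table = $p->getUnformatted('mytable');
    for($r = 0; $r < 5; $r++) {
        $row = $table->makeBlankItem();
        $row->col1 = "value $i-$r";
        $table->add($row);
    }
    $p->save('mytable');
}
```

Running it simultaneously in three terminals (`php stress.php` in each) mirrors the concurrency described above.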
@adrianbj It might be worth trying to disable binary logs in MySQL to see if that resolves this issue.
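For what it's worth, on MySQL 8 (where the binary log is on by default) it can be disabled with `skip-log-bin` in the server config; the path below is the usual Ubuntu location and may differ elsewhere:

```
# /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
skip-log-bin
```

After restarting MySQL, `SHOW VARIABLES LIKE 'log_bin';` should report OFF.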
@adrianbj I enabled binary logs in MariaDB but no issues there either.
OK, another update. Despite switching to a new production server with Ubuntu 24.04 and MariaDB (with binary logs disabled by default), this issue is still occurring. On my dev server, which also now has MariaDB, I wasn't able to replicate the issue the way I originally could. I will need to dive deeper into this again.
I don't know if this is related, but when I view the MariaDB logs on my new production server for the site that's experiencing this deadlock issue, I see this:
Googling led me to this GitHub post: Based on a commenter there, I see that my
I increased it by adding this line to
I then restarted MariaDB and verified the new value is now active. I haven't run this yet, however:
I've cleared my error and exception log files in ProcessWire. I'll report back in a day or two to see whether the deadlock issue is still occurring (it happens about 1-2 times a day, which is less often than before).
@jlahijani Hello Jonathan. Is there any correlation between the code that is triggering the deadlock exception (you might need to go back in the stack trace) and the two fields with incomplete FTS size you mentioned in your post above?
This post on Stack Overflow may help with your info-gathering, @jlahijani |
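Along the same lines, InnoDB itself keeps a report of the most recent deadlock in both MySQL and MariaDB, which is usually the fastest way to see which statements and locks are involved:

```sql
-- Prints a "LATEST DETECTED DEADLOCK" section with the two
-- conflicting transactions, their statements, and the locks involved
SHOW ENGINE INNODB STATUS;

-- Optionally write every deadlock (not just the latest) to the error log
SET GLOBAL innodb_print_all_deadlocks = ON;
```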
OK... I came across this issue again and I've resolved it. First, this has nothing to do with using FieldtypeTable. It has everything to do with MySQL/MariaDB being completely inundated and not being able to keep up. I'm not sure whether that falls within how ProcessWire is doing queries or within MySQL/MariaDB itself. I was able to trigger this issue reliably with a site that I'm working on and went to work figuring out a fix. I first went with the approach of modifying a bunch of MySQL/MariaDB settings, but didn't have any luck there. Then ChatGPT recommended, as a final resort, simply catching the exception (which ProcessWire already does) but retrying the transaction instead of throwing the exception, which seems to work around and effectively resolve this issue.

In ProcessWire 3.0.247, /wire/core/FieldtypeMulti.php has this bit of code:

```php
try {
    $result = $query->execute();
} catch(\Exception $e) {
    $exception = $e;
}
```

We can replace that with this:

```php
$maxRetries = 5; // Number of times to retry
$attempt = 0;
while($attempt < $maxRetries) {
    try {
        $result = $query->execute();
        $exception = false; // If successful, clear exception
        break; // Exit the loop if successful
    } catch(\Exception $e) {
        // PDO reports the SQLSTATE ('40001') via getCode(), so the
        // loose comparison below matches the string value it returns
        if($e->getCode() == 40001) { // Deadlock error
            $exception = $e; // Keep it in case all retries are exhausted
            $attempt++;
            usleep(100000 * $attempt); // Linear backoff (100ms, 200ms, 300ms, etc.)
        } else {
            $exception = $e; // Store non-deadlock exceptions
            break; // Exit loop for other exceptions
        }
    }
}
```

@ryancramerdesign What do you think about that? It may be hard for you to replicate the issue, but even if you can't, would you consider retrying 40001 errors?
@jlahijani and @ryancramerdesign - there is also the related issue #2038 with another MySQL error that needs retrying. I think this is desperately in need of being made more robust.
Short description of the issue
I received the following exception when updating an existing row in a Profields Table field.
```
ProcessWire\WireDatabaseQueryException: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction #40001 in /var/www/html/wire/core/FieldtypeMulti.php:253 caused by PDOException: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction #40001
```
Steps to reproduce the issue
This is the code that triggered this error, but I have been using it for a long time and this is the first time I have seen this. Given the nature of the error, I am not really surprised though, as it's probably all about timing. Anyway, I don't know how you can really reproduce this, but
Here are what look to be the relevant parts of the stack trace.
I'm not completely sure, but perhaps there is some useful info here: https://www.drupal.org/node/1369332 ? There are many other references to / solutions for this error, but it's not a one-size-fits-all issue. Let me know if there is anything else I can provide.
Setup/Environment
Server Details
Server Settings
GD Settings
iMagick Settings
Module Details