[BUG] Web App Not Functioning - Database Locked #341
Hey @joe-eklund
Web app says:
Using Hub/Spoke deployed through Docker on amd64, with the following images:
I do have hourly backups run through Duplicacy. Duplicacy backs up my home folder (where the folder for Scrutiny lives) to a ZFS pool on the same host.
From what I've read, this shouldn't be happening unless there are multiple connections to the SQLite DB. The really weird thing is that even if multiple processes were locking the DB file, it should be a transient error: eventually the other process should close its connection and release the lock.
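For reference, the error under discussion is easy to trigger by hand with two connections to the same SQLite file. This is a hypothetical minimal reproduction using Python's stdlib `sqlite3` (Scrutiny itself is written in Go); the path and table names are made up for illustration:

```python
import os
import sqlite3
import tempfile

# Two connections to the same SQLite file: one holds an open write
# transaction while the other tries to write.
path = os.path.join(tempfile.mkdtemp(), "scrutiny.db")

writer = sqlite3.connect(path, isolation_level=None)  # autocommit; manage txns manually
writer.execute("CREATE TABLE devices (id INTEGER PRIMARY KEY, name TEXT)")

# Open a write transaction and leave it uncommitted (the "stuck" process).
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO devices (name) VALUES ('sda')")

# A second connection with no busy timeout fails immediately instead of
# waiting for the lock to clear.
other = sqlite3.connect(path, timeout=0)
try:
    other.execute("INSERT INTO devices (name) VALUES ('sdb')")
except sqlite3.OperationalError as e:
    print(e)  # database is locked

writer.execute("ROLLBACK")
```

As the comment above notes, the lock should normally be transient; the hang in this bug suggests a transaction was being held open indefinitely.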
Sorry it took me a few days to get back to you; I was on vacation. :) The only other thing is that I do have another Scrutiny collector on another machine (so two total) reporting back to the single Scrutiny web instance. But I thought that reported directly through the Scrutiny web container, so it shouldn't really be another "connection" to the DB, right?
Yes, I agree. I will note that the web app has been working since I restarted it on Saturday (4 days ago), but it had frozen twice before over the past couple of weeks. I will continue monitoring it and checking the logs to see if something more useful comes up to help with reproduction.
I can confirm the issue using Podman; everything is on the latest version. At the start everything works, but after letting the tool run for several hours it ends up in the same DB-locked state. Since I'm running containers, access to the file is guaranteed to be exclusive to one application. Configuration: 1 UI, 2 collectors. What's more interesting: being unable to push metrics, the collectors crash as well.
In theory, if the application accesses the database from several different threads, it is the same as parallel access from several processes. Given that the UI reader and a writer might run on different threads, a lock error could occur whenever a read and a write collide. If that is indeed the case, moving from the rollback journal to WAL might help (i.e. `PRAGMA journal_mode=WAL;`).
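A quick sketch of the suggested mitigation, again in stdlib Python `sqlite3` (the project itself is Go, so this only illustrates the SQLite-level behavior): in WAL mode a reader no longer errors out while a write transaction is open, it simply reads the last committed snapshot. Paths and table names are hypothetical.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "scrutiny.db")

conn = sqlite3.connect(path, isolation_level=None)  # autocommit; manage txns manually
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal

conn.execute("CREATE TABLE metrics (v INTEGER)")

# Open a write transaction and leave it uncommitted.
conn.execute("BEGIN IMMEDIATE")
conn.execute("INSERT INTO metrics VALUES (1)")

# In WAL mode a second connection can still read: it sees the snapshot
# from before the open transaction instead of raising "database is locked".
reader = sqlite3.connect(path, timeout=0)
print(reader.execute("SELECT COUNT(*) FROM metrics").fetchone()[0])  # 0

conn.execute("COMMIT")
```

Note that WAL helps reader/writer contention; two concurrent writers can still collide, which is where a busy timeout comes in.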
When a transaction cannot lock the database because it is already locked by another one, SQLite by default throws an error: database is locked. This behavior is usually not appropriate when concurrent access is needed, typically when multiple processes write to the same database. `PRAGMA busy_timeout` lets you set a timeout or a handler for these events; when a timeout is set, SQLite retries the transaction within that window. https://rsqlite.r-dbi.org/reference/sqlitesetbusyhandler Retrying for 30000 milliseconds (30 seconds) would be unreasonable for a distributed multi-tenant application, but should be fine for local usage. Added a mechanism for global settings (PRAGMA and DB-level instructions). Fixes #341
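The pragma described in the commit message can be illustrated with stdlib Python `sqlite3` (the actual fix applies the same setting from Scrutiny's Go code; the database path here is hypothetical):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "scrutiny.db")
conn = sqlite3.connect(path)

# Ask SQLite to retry a locked transaction for up to 30 000 ms instead of
# failing immediately with "database is locked".
conn.execute("PRAGMA busy_timeout = 30000")

# Querying the pragma returns the currently configured timeout.
print(conn.execute("PRAGMA busy_timeout").fetchone()[0])  # 30000
```

With the timeout in place, a writer that hits a momentary lock blocks and retries instead of surfacing the error to the web app.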
Quick update for everyone: I've been able to reproduce this error locally. It's definitely caused by concurrent requests to the DB. I've fixed the issue via the following commit on the beta branch: a1b0108. The beta branch is still a bit unstable, so please don't switch to it, but you should see the fix in v0.5.0, which should be released in the next couple of days. Thanks for all your help debugging this!
Just tried it out a few times, and I'm no longer hitting this issue!
Describe the bug
After the web app has been running for a little while, I get a database locked error message in the logs and the web application does not load. Spinning the container down and back up makes the app functional for another few days, until I check it again and it's broken.
Expected behavior
The web app loads.
Log Files
Chrome Console:
Scrutiny Web Logs:
Any ideas? Thanks.