Unable to start monitor with KnexTimeoutError: Knex: Timeout acquiring a connection. #4153
Comments
Considering you are running this on a router, are you using USB-attached storage? The database has likely grown so large that the increased IO latency is more than the application can handle.
It was running on USB-attached storage, but it's a fast drive (DataTraveler Max, USB 3.2 Gen 2), and it wasn't even utilizing the disk much. I moved the container with its data folder to an 8th-gen Intel NUC running the latest Docker, with the Docker filesystem on a dedicated NVMe drive. Unfortunately, the issue persists. Is there any advice on how to reset the app's accumulated history data without losing all 89 configured monitors? I am mostly using HTTP, PING, and MQTT checks.
That's very strange. If you have worked with databases before, you can stop the application, open the database file directly, and clear out the accumulated heartbeat history by hand.
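A minimal sketch of what that manual cleanup could look like, assuming the default SQLite layout (a `data/kuma.db` file with a `heartbeat` table); verify the path, container name, and schema on your own install before running anything:

```sh
# Stop the container first so nothing writes to the database
# ("uptime-kuma" is a placeholder container name).
docker stop uptime-kuma

# Back up the database before touching it.
cp data/kuma.db data/kuma.db.bak

# Delete heartbeat rows older than 7 days, then reclaim the freed pages.
sqlite3 data/kuma.db <<'SQL'
DELETE FROM heartbeat WHERE time < datetime('now', '-7 days');
VACUUM;
SQL

docker start uptime-kuma
```

This keeps the monitor definitions intact and only drops the accumulated history rows.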
Hello, I seem to have a similar problem. This morning, without having received any particular notification, I saw that all the monitors had failed during the night; the docker logs show the same KnexTimeoutError line. I'm on a VPS running AlmaLinux 8.9 (Midnight Oncilla) with a 4.18.0-513.5.1.el8_9.x86_64 kernel. Regards,
Thanks. I did the following, which reduced the size of the database from 297M down to 252K. At least uptime-kuma will start up now, and there are no errors in the logs so far; let's see...

I also changed "Keep monitor history data" from 180 days to 7 days. Still, I installed it about 3 weeks ago at most.
I see a very similar error, running in Docker on my Synology NAS.
Piling on: I have also been seeing this issue for over a year when I am moving around the app a lot (looking across monitors, editing the dashboards), but it usually self-resolves after a couple of minutes of letting things catch up. Today it has not resolved, which led me to this issue.

Edit 1:
Edit 2:
Edit 3:
We implemented incremental_vacuum in 1.23, which should have mitigated this issue. Can you check whether you are running the latest version, and if so, whether there are items in the logs that indicate the incremental_vacuum task has failed?
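For context, this is roughly the SQLite mechanism behind that task (a generic sqlite3 session for illustration, not uptime-kuma's actual code):

```sh
sqlite3 data/kuma.db <<'SQL'
-- auto_vacuum must be INCREMENTAL before free pages can be trimmed in
-- steps; switching it on an existing database only takes effect after a
-- one-time full VACUUM.
PRAGMA auto_vacuum = INCREMENTAL;
-- Number of free pages currently waiting to be reclaimed.
PRAGMA freelist_count;
-- Release up to 1000 free pages back to the OS without a full rewrite.
PRAGMA incremental_vacuum(1000);
SQL
```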
@chakflying Just confirmed: 1.23.11. I have Docker image update monitors, so I would have updated within days to a week of release.
I think the problem is less about the vacuum task failing and more about the sheer volume of history being read.
What is your retention time in the settings set to? |
@CommanderStorm Took me a few minutes to find that, as I'd never changed it. It was set to the (I assume) default of 180 days. I set it to 14 days just now to hopefully avoid this issue again.
Okay. I am assuming your issue is the same, @lnagel. A lot of performance improvements (using aggregated vs. non-aggregated tables to store heartbeats, enabling users to choose MariaDB as a db-backend, pagination of important events) have been made for the upcoming release. You can subscribe to our releases and get notified when a new release is published.

Meanwhile (the issue is with SQLite not reading data fast enough to keep up), reducing the retention time in the settings and shrinking the database as described above should help.
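To gauge whether the SQLite file itself is the bottleneck in the meantime, a quick inspection might look like the following; the `data/kuma.db` path and the `heartbeat` table name are assumptions based on a default setup.

```sh
# On-disk size of the database file.
ls -lh data/kuma.db

# Page statistics and the heartbeat row count (the table assumed to
# dominate the database size in a default install).
sqlite3 data/kuma.db <<'SQL'
PRAGMA page_count;
PRAGMA freelist_count;
SELECT COUNT(*) FROM heartbeat;
SQL
```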
🛡️ Security Policy
Description
Startup crash with `Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?`
👟 Reproduction steps
Start the container with Uptime Kuma, log in, and wait for data to load on the dashboard. Check the logs for errors. The standard run command is shown below for reference.
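For reference, the standard way to start the container (per the project README) looks like this; the volume name and host port are up to you:

```sh
docker run -d --restart=always \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma \
  louislam/uptime-kuma:1
```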
👀 Expected behavior
The dashboard loads, monitor checks run, and the log is not full of errors.
😓 Actual Behavior
Dashboards do not load any data, monitor checks are not being run, and the log is full of errors.
🐻 Uptime-Kuma Version
1.23.7
💻 Operating System and Arch
MikroTik RouterOS 7.11.2
🌐 Browser
Firefox latest
🐋 Docker Version
No response
🟩 NodeJS Version
18.18.2
📝 Relevant log output