
error deleting monitor #5373

Closed
sb2050 opened this issue Nov 24, 2024 · 8 comments

sb2050 commented Nov 24, 2024

📑 I have found these related issues/pull requests

None found.

🛡️ Security Policy

Description

I get an error message when deleting a monitor.

👟 Reproduction steps

Delete a monitor.

👀 Expected behavior

It should delete the monitor.

😓 Actual Behavior

Uptime Kuma stops working and I see the error message in the log.

🐻 Uptime-Kuma Version

1.23.15-debian

💻 Operating System and Arch

AlmaLinux

🌐 Browser

Firefox

🖥️ Deployment Environment

  • Runtime: Docker version 27.3.1, build ce12230
  • Database: -
  • Filesystem used to store the database on: ext4
  • Number of monitors: 40

πŸ“ Relevant log output

uptime-kuma  |     at process.unexpectedErrorHandler (/app/server/server.js:1905:13)
uptime-kuma  |     at process.emit (node:events:517:28)
uptime-kuma  |     at emit (node:internal/process/promises:149:20)
uptime-kuma  |     at processPromiseRejections (node:internal/process/promises:283:27)
uptime-kuma  |     at process.processTicksAndRejections (node:internal/process/task_queues:96:32)
uptime-kuma  | If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
uptime-kuma  | Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
uptime-kuma  |     at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:312:26)
uptime-kuma  |     at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:287:28)
uptime-kuma  |     at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
uptime-kuma  |     at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:572:22)
uptime-kuma  |     at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:558:22)
uptime-kuma  |     at async RedBeanNode.getCell (/app/node_modules/redbean-node/dist/redbean-node.js:593:19)
uptime-kuma  |     at async Settings.get (/app/server/settings.js:54:21)
uptime-kuma  |     at async UptimeKumaServer.getClientIPwithProxy (/app/server/uptime-kuma-server.js:313:13)
uptime-kuma  |     at async Object.allowRequest (/app/server/uptime-kuma-server.js:122:34) {
uptime-kuma  |   sql: 'SELECT `value` FROM setting WHERE `key` = ?  limit ?',
uptime-kuma  |   bindings: [ 'trustProxy', 1 ]
uptime-kuma  | }
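
For context on the trace: Knex raises KnexTimeoutError when every connection in its pool is already checked out and the acquisition timeout (60 s by default) elapses. The hint about .transacting(trx) points at a classic way to exhaust the pool: opening a transaction and then issuing further queries through the root knex object, which requests additional pool connections while the transaction still holds one. A minimal sketch of that anti-pattern, illustrative only — the table names, file path, and deleteMonitor helper are hypothetical, not Uptime Kuma's actual code:

```js
// Illustrative sketch, not Uptime Kuma's code: how a transaction can
// exhaust the Knex pool and trigger the exact error quoted above.
const knex = require("knex")({
  client: "sqlite3",
  connection: { filename: "./kuma.db" }, // hypothetical database file
  useNullAsDefault: true,
  pool: { min: 0, max: 1 }, // a tiny pool makes the deadlock immediate
});

// Hypothetical helper mirroring a "delete monitor" flow.
async function deleteMonitor(id) {
  await knex.transaction(async (trx) => {
    // BAD: this goes through the root `knex` object, so it requests a
    // *second* pool connection while the transaction holds the first.
    // With the pool exhausted, this call (and any concurrent query, like
    // the Settings.get in the trace) stalls until it fails with
    // "KnexTimeoutError: Timeout acquiring a connection."
    await knex("heartbeat").where({ monitor_id: id }).del();

    // GOOD: bind the query to the transaction's own connection.
    await knex("monitor").where({ id }).del().transacting(trx);
    // (equivalently: await trx("monitor").where({ id }).del();)
  });
}
```

Note that the same timeout also fires when the pool is simply saturated by slow queries, with no transaction bug anywhere; in that case the generic Knex-level knobs are the pool size (pool: { max }) and acquireConnectionTimeout, rather than a code change.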
sb2050 added the bug label Nov 24, 2024
CommanderStorm (Collaborator) commented Nov 24, 2024

The database performance issue should be resolved in V2. Please consider testing the beta ^^

CommanderStorm added the help label and removed the bug label Nov 24, 2024
sb2050 closed this as completed Nov 26, 2024

luisfavila commented Dec 5, 2024

@CommanderStorm I've been using the v2 beta since its release, and was using master previously. I still get the same error a few times per week: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?

CommanderStorm (Collaborator) commented

What is your metadata (instance size, database used, storage, number of monitors, …)?
Does it only happen when deleting monitors, or generally?


luisfavila commented Dec 9, 2024

> What is your metadata (instance size, database used, storage, number of monitors, …)? Does it only happen when deleting monitors, or generally?

Sorry for the late response. I have ~35 monitors divided over 3 monitoring groups, using MariaDB replicated with Galera.
I get the error generally (it shows up in the monitor logs as downtime), not when deleting. Is there a better ticket to discuss this, or should I open a new one?

CommanderStorm (Collaborator) commented

Let's open a new one, and please go into detail about what your Galera setup looks like. I don't think this is the same issue.

louislam (Owner) commented

> MariaDB replicated with Galera

Please note that Uptime Kuma has not been optimized for Galera Cluster. In my experience, Galera Cluster's write performance is really bad, which could possibly make MariaDB even slower than SQLite.

luisfavila commented

Understood. It seems to happen mostly at the same time every day, which means the DB is probably getting overloaded at that time. @louislam @CommanderStorm Would you consider adding a setting so we can tell Kuma not to count these database errors as the monitor being "down"? That'd be quite helpful in my situation.

CommanderStorm (Collaborator) commented Jan 9, 2025

You can set up a maintenance window for the times when you don't want anything to count against downtime or to send notifications.

On v2 you might not have such grave issues, but if writes are really so constrained, all bets are off.

Also, please only ping if something is important. Getting so many push notifications is tiring.
