Slowness with 100% CPU utilization #2889

Open
2 tasks done
AyhamZz opened this issue Mar 6, 2023 · 15 comments
Labels
area:core issues describing changes to the core of uptime kuma bug Something isn't working

Comments

@AyhamZz

AyhamZz commented Mar 6, 2023

⚠️ Please verify that this bug has NOT been raised before.

  • I checked and didn't find a similar issue

🛡️ Security Policy

Description

No response

👟 Reproduction steps

I noticed significant slowness on my Uptime Kuma monitor and found that CPU utilization was at 100%. After increasing the CPU cores it still hits 100%; see the following screenshots:

[screenshot]

I also found the following Docker logs:

[screenshot]

I am using Ubuntu 22.04 LTS:
[screenshot]

with the following Docker version:

[screenshot]

Uptime Kuma is on the latest version, and I tried restarting both Docker and the server:
[screenshot]

👀 Expected behavior

CPU utilization caused by "node" might be the main reason for the slowness.

😓 Actual Behavior

Slowness

🐻 Uptime-Kuma Version

1.20.2

💻 Operating System and Arch

Ubuntu 22.04 LTS

🌐 Browser

Google Chrome Version 110.0.5481.178

🐋 Docker Version

20.10.12

🟩 NodeJS Version

No response

📝 Relevant log output

Trace: Error: aborted
    at PendingOperation.abort (/app/node_modules/tarn/dist/PendingOperation.js:25:21)
    at /app/node_modules/tarn/dist/Pool.js:208:25
    at Array.map (<anonymous>)
    at /app/node_modules/tarn/dist/Pool.js:207:53
    at async Client_SQLite3.destroy (/app/node_modules/knex/lib/client.js:338:9)
    at async RedBeanNode.close (/app/node_modules/redbean-node/dist/redbean-node.js:375:9)
    at async Function.close (/app/server/database.js:435:13)
    at async Object.shutdownFunction [as onShutdown] (/app/server/server.js:1771:5)
    at process.<anonymous> (/app/server/server.js:1794:13)
    at process.emit (node:events:525:35)
    at emit (node:internal/process/promises:140:20)
    at processPromiseRejections (node:internal/process/promises:274:27)
    at processTicksAndRejections (node:internal/process/task_queues:97:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
(the same trace repeats several more times in the log output)

@AyhamZz AyhamZz added the bug Something isn't working label Mar 6, 2023
@louislam

This comment was marked as resolved.

@AyhamZz

This comment was marked as resolved.

@webartifex

webartifex commented Mar 8, 2023

I can confirm @AyhamZz's observations.

For the past couple of days, Uptime Kuma has been very, very slow.

For example, when you open it, it takes up to a minute until the monitors are shown.

I do have a recurring maintenance window. Yet there were no issues when I set it up about 3-4 weeks ago, so it must be a more recent version that causes the problems.

I run Uptime as a Docker container on an Ubuntu 22.04 LTS VM.

Since I am not really involved on a technical level with your (great) project, @louislam, maybe you can tell me how I can help find and fix this bug.

@louislam
Owner

louislam commented Mar 8, 2023

@webartifex

  • I recently discovered that if a recurring maintenance runs into a weird state, it will keep generating time slots endlessly, which can slow down Uptime Kuma. My recommendation is to delete the maintenance first.
  • If you are using the Redis monitor, it also causes performance issues: Redis connections are not cleared #2900

@webartifex

@louislam

I will try that out and disable the maintenance windows.

However, I have to admit that they are quite a useful feature.

I just found that my local SQLite database with all the history was about 2 GB, even though I only keep the last 7 days of history for about 50 monitors. That seems to confirm your finding that "it will keep generating time slots endlessly".

@webartifex

@louislam

UPDATE: Disabling the maintenance windows works.

Also: whoever faces the same issue, maybe take a look at the kuma.db file. Some tables may need to be emptied.
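
For anyone who wants to check, here is a rough sketch of inspecting kuma.db with the sqlite3 CLI. The table names (heartbeat, maintenance_timeslot) are assumptions based on this thread and may differ between versions; stop Uptime Kuma and back up the file before touching it:

    # Back up first (the path is a placeholder for wherever /app/data is mounted).
    cp /path/to/uptime-kuma/data/kuma.db kuma.db.bak
    # Count rows per table to see what is bloating the file.
    sqlite3 kuma.db "SELECT COUNT(*) FROM heartbeat;"
    sqlite3 kuma.db "SELECT COUNT(*) FROM maintenance_timeslot;"
    # After deleting rows, reclaim the disk space.
    sqlite3 kuma.db "VACUUM;"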

@AyhamZz
Author

AyhamZz commented Mar 13, 2023

@louislam

Thank you for your response. Regarding the maintenance window, I haven't used it so far, but it is worth mentioning that the issue disappears whenever I clear the logs, export the backup file, and then import it again using overwrite.

I have good resources attached to my VM:
[screenshot]

For now, it is stable after the steps I mentioned. Thank you again.

@webartifex

@louislam

I saw the number of open tickets. Seems like a lot to do.

Do you plan to fix this maintenance window issue?
It's not urgent for me but would be nice to have in general.
I recommend your tool to a lot of friends starting home labs.

By the way: I bought a $5 sponsorship. Thanks for your effort.

@louislam
Owner

@webartifex It should be fixed in 1.21.0-beta.1; feel free to try it if you don't mind using a beta version.

https://github.com/louislam/uptime-kuma/releases/tag/1.21.0-beta.1

@petercharleston

Hi Louis,

I really like your product.

We have run into the same issue, with CPU at 100%+ for the node process.

I deleted the maintenance (MX) schedules and also upgraded to 1.21.2.

The UI is still very slow or not loading at all.

Adding or editing monitors is very very slow.

We have about 3530 monitors running, and I doubled the VM spec today to 32 GB RAM and 8 vCPUs, which did not make a difference.

The DB size is 2234 MB.
I set the monitor history to 7 days; it was at 180 days.

Any advice, please?
[screenshot]

[screenshot]

@petercharleston

Update:

I followed the advice from AyhamZz to export the backup file and then import it again using overwrite.

We lost our status pages, but this fixed the issue and shrank the DB down considerably from 2 GB.

@louislam
Owner

louislam commented Apr 5, 2023

We have about 3530 monitors running, and I doubled the VM spec today to 32 GB RAM and 8 vCPUs, which did not make a difference.

Unfortunately, 3530 monitors right now is too many for Uptime Kuma.

@petercharleston

Sorry, that was meant to be 350 monitors.
All working fine now, thanks.

@j-f1
Contributor

j-f1 commented Apr 6, 2023

If other people run into these sorts of issues, you may be able to help by defining the environment variable NODE_OPTIONS with the value --cpu-prof --diagnostic-dir=/app/data. If you run Uptime Kuma for a short time, reproduce the slowness, and then exit it gracefully, the .cpuprofile file created in the data directory will provide detailed information about what was going on while it was running slowly. (The resulting profile file can be loaded into Chrome's dev tools to inspect it.)
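
A minimal sketch of how that might look with the official Docker image (the container name, port, and volume path are placeholders; note that a later comment reports Node rejecting --cpu-prof when it is set through NODE_OPTIONS):

    docker run -d --name uptime-kuma \
      -p 3001:3001 \
      -v uptime-kuma:/app/data \
      -e NODE_OPTIONS="--cpu-prof --diagnostic-dir=/app/data" \
      louislam/uptime-kuma:1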

@Zandor300
Contributor

@louislam

Running uptime-kuma 1.21.3.

I'm experiencing the same slowness on my instance with ~90 monitors, using only ping, HTTP, and push monitor types.

My DB is 5 GB according to the UI, and history was set to 180 days. I changed it to 30 days, but I don't know when it will start removing old history.

@j-f1 I tried adding your NODE_OPTIONS to the Docker container:

    -v /root/uptime-kuma-prof:/app/data-prof \
    --env NODE_OPTIONS="--cpu-prof --diagnostic-dir=/app/data-prof"

but it spat out the following:

    node: --cpu-prof is not allowed in NODE_OPTIONS
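
--cpu-prof is one of the flags Node.js refuses to read from NODE_OPTIONS, so it has to go on the node command line itself. One possible (untested) workaround is to override the container command, assuming the server entry point is /app/server/server.js as the stack traces above suggest:

    # Override the image's default command so the profiling flags are passed
    # directly to node instead of via NODE_OPTIONS (untested sketch; names and
    # paths are placeholders).
    docker run -d --name uptime-kuma-prof \
      -p 3001:3001 \
      -v uptime-kuma:/app/data \
      louislam/uptime-kuma:1 \
      node --cpu-prof --diagnostic-dir=/app/data /app/server/server.js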

@CommanderStorm CommanderStorm added the area:core issues describing changes to the core of uptime kuma label Dec 7, 2023