Exporter may freeze if network file system is unavailable #3058
See issue #2903; this seems to be the expected behavior.
…unt can be deleted after it is successfully executed in goroutine. prometheus#2903 prometheus#3058 Signed-off-by: joey <zchengjoey@gmail.com>
I'm not sure whether this behavior is expected, because in my test case the response time varies from 10.2 s to an unknown amount of time, and in both cases it is longer than the default Prometheus scrape timeout. I discovered this problem after another network outage on my VM. During the outage, the local Prometheus didn't collect any node_exporter metrics because the …
See #3063: I tried to fix the issue of the timeout not working, and #2903 was reverted because it was necessary to ensure that the mount could be removed from stuckMounts after successful execution.
f7b3413 works for me.
**Host operating system**: output of `uname -a`

```
Linux vm-ubuntu-22 5.15.0-1060-kvm #65-Ubuntu SMP Tue May 21 09:31:15 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
**node_exporter version**: output of `node_exporter --version`
**node_exporter command line flags**:

```
--collector.filesystem.mount-timeout=1s
--log.level=debug
```
**node_exporter log output**: see the attached log.
**Are you running node_exporter in Docker?**

No.
**What did you do that produced an error?**

1. Add a cifs entry to `fstab` on the first VM.
2. Run `mkdir /media/test1` on the first VM.
3. Run `mount /media/test1` on the first VM.
4. Run `time curl -s http://localhost:9100/metrics | grep node_scrape_collector_duration_seconds`.

If you don't like cifs, replace it with sshfs.
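The steps above could be scripted roughly as follows. This is a sketch only: the fstab entry and server address are hypothetical placeholders, and the step that makes the file server unreachable is not spelled out in the report.

```shell
#!/bin/sh
# Hypothetical /etc/fstab entry on the first VM (192.0.2.10 is a placeholder):
#   //192.0.2.10/share  /media/test1  cifs  credentials=/root/.smbcred  0  0

mkdir -p /media/test1
mount /media/test1

# ...make the CIFS server unreachable here (e.g. power off the second VM)...

# Measure how long the filesystem collector takes to answer:
time curl -s http://localhost:9100/metrics \
  | grep node_scrape_collector_duration_seconds
```

With `--collector.filesystem.mount-timeout=1s` set, the `time` output is expected to stay close to one second; in the reported behavior it does not.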
**What did you expect to see?**

The exporter should respond in about 1 second:

```
node_scrape_collector_duration_seconds{collector="filesystem"} 1.20955266
```
**What did you see instead?**

The exporter may freeze for an unknown amount of time; sometimes it responds after about 10 seconds.