[Bug] Prometheus blocky_prefetch_domain_name_cache_count showing inconsistent values if bootstrap is enabled #891
Comments
It looks strange... Do you see any errors or warnings in the log?
No, there aren't any errors in the log. For example, this is the log from today:
These are the stats for blocky_prefetch_domain_name_cache_count from the last two hours. Everything else looks quite "normal":
So I analyzed it a little bit more: directly after blocky starts, some values in the Prometheus endpoint initialize with the value 7:
After a few seconds they change to the first "real" values:
I'm using blocky 0.20 with the latest Docker image.
I could reproduce the problem. It occurs only if bootstrap is enabled. Bootstrap internally also uses a caching resolver with prefetching, so there are two different caching resolvers with prefetching, and both of them emit metrics. When you sometimes see "7", I assume this is the number of domains (list downloads and upstream URLs) which the bootstrap resolver is currently handling. I deactivated metrics in bootstrap; it should work as expected now.
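The mechanism described above can be sketched in Go (blocky's language). This is a hedged illustration, not blocky's actual code: the `gauge` type and the cache sizes are invented for the example. It only shows why a single metric value written by two independent resolvers flickers between their cache sizes, with the last writer winning on each scrape.

```go
package main

import "fmt"

// gauge is a stand-in for a Prometheus gauge shared by two resolvers.
// (Illustrative only; blocky uses the prometheus/client_golang library.)
type gauge struct{ value float64 }

func (g *gauge) Set(v float64) { g.value = v }

func main() {
	domainCount := &gauge{}

	mainCacheSize := 1500.0   // prefetch cache of the main caching resolver (made-up size)
	bootstrapCacheSize := 7.0 // list downloads and upstream URLs handled by bootstrap

	// The main caching resolver reports its prefetch cache size:
	domainCount.Set(mainCacheSize)
	fmt.Println(domainCount.value) // 1500

	// The bootstrap resolver's update overwrites the main resolver's value,
	// producing the sudden "jump down to 7" seen in the Prometheus graph:
	domainCount.Set(bootstrapCacheSize)
	fmt.Println(domainCount.value) // 7
}
```

Disabling metric emission in the bootstrap resolver, as described in the fix, leaves only one writer for the gauge and removes the flicker.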
Hello,
I'm using blocky 0.20. After playing around a little with the prefetch configuration and looking at the Prometheus stats, I saw that the value blocky_prefetch_domain_name_cache_count sometimes jumps down to "7" and then, after a few seconds, back up to the previous value. You can see that directly on the graph here:
Since it doesn't seem to me that the prefetch cache is being cleared, is this just a reporting error of the Prometheus endpoint?
Here you can see my current config:
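The reporter's actual config is not reproduced here. As a point of reference, a minimal blocky configuration with prefetching, Prometheus metrics, and bootstrap enabled looks roughly like this. This is an illustrative sketch based on blocky's documented options, not the reporter's file; the upstream and bootstrap addresses are placeholders:

```yaml
upstream:
  default:
    - tcp-tls:1.1.1.1:853
# Having a bootstrap DNS configured is what triggers the reported behavior,
# since the bootstrap resolver also runs a prefetching cache.
bootstrapDns: tcp+udp:1.1.1.1
caching:
  prefetching: true
  prefetchExpires: 2h
  prefetchThreshold: 5
prometheus:
  enable: true
  path: /metrics
```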
Kind regards.