Metrics only produced for a few initial scrapes #163
Comments
Hey @jakob-reesalu, I was pretty busy with my day job for the past two weeks. I'll take a look at this issue and will also follow up on the other one. 👍
@burningalchemist Ya no worries!
Update: For example, I refresh the /metrics page and it just gets stuck loading; then I bring up the command window, press the up or down arrow, and suddenly I see logging about collected metrics, at which point the /metrics page stops loading and presents the metrics. So, coming back from the weekend, I had logs showing "error gathering metrics" until I pressed the up/down arrows in the command window, after which the exporter logged that metrics were gathered.
Actually! I'm running in PowerShell and googled this; it turns out it's due to a PowerShell console setting described in this answer: https://serverfault.com/questions/204150/sometimes-powershell-stops-sending-output-until-i-press-enter-why So that's solved now and not a sql_exporter issue :)
Hey @jakob-reesalu, did you eventually figure out the configuration? I believe in the end it's related to the timeout configured on the Prometheus side: if the connection is cut after 15s, sql_exporter also cancels the in-flight request, since there's no receiver left to respond to. I'm going to close the issue as stale/solved. Feel free to reopen it. 👍 Cheers!
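For reference, the timeout-related knobs on the exporter side live in the `global` section of the sql_exporter config. A minimal sketch, with illustrative values rather than recommendations:

```yaml
global:
  # Hard upper bound for any single scrape. If Prometheus advertises a
  # shorter timeout in its scrape request, the shorter value wins.
  scrape_timeout: 10s
  # Headroom subtracted from the Prometheus-advertised timeout so the
  # exporter can still answer before Prometheus drops the connection.
  scrape_timeout_offset: 500ms
```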
Hey man @burningalchemist! I don't recall exactly what I did in the end. Unfortunately I haven't fully reached the "end", as we're still having issues with the exporter. =/ Right now we get one or perhaps a few scrapes in a day, then Prometheus alerts that the exporter is down even though the service is still running on the DB machine. And even while the service is running, the /metrics page shows errors instead of the metrics it previously managed to get. Not sure if our issue relates to what you've added on the start page under "Configuration"; the thing is that the database is up and running in our case, so it doesn't seem to be unreachable. I'll see if I get time to come back to this in the future.
@jakob-reesalu Got it, thanks! Yeah, please re-open once you have some time, and maybe we can go over it again; I'm happy to assist. In the meantime I'll think about it. There can be many factors when the query runs long.
Describe the bug
One of my collectors produces metrics during the first few scrapes, maybe 4-5, and then no more metrics show up on the /metrics endpoint. The collector uses the query mentioned in this issue: #154 (comment), but since that query has now proved to work, I suspect some other issue is at play here.
To Reproduce
Expected behavior
Metrics produced every 6 hours according to min_interval (see config below). Even if the query fails to produce new metrics, surely the cached metrics should still show up on the /metrics endpoint, no?
Configuration
Prometheus config:
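Roughly like this; the job name, interval, and target are placeholders, the 15s scrape_timeout is the relevant part:

```yaml
scrape_configs:
  - job_name: "sql_exporter"        # placeholder
    scrape_interval: 15s            # placeholder
    scrape_timeout: 15s             # as discussed below
    static_configs:
      - targets: ["db-host:9399"]   # placeholder target
```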
Sql-exporter config:
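Along these lines; the DSN, collector, and query below are placeholders rather than our real ones:

```yaml
# Sketch, not the exact file; DSN and query are placeholders.
global:
  scrape_timeout: 1h
  min_interval: 6h
target:
  data_source_name: "sqlserver://user:password@db-host:1433"
  collectors: [long_query_collector]
collectors:
  - collector_name: long_query_collector
    min_interval: 6h
    metrics:
      - metric_name: long_query_row_count
        type: gauge
        help: "Row count from the long-running query."
        values: [row_count]
        query: |
          SELECT COUNT(*) AS row_count FROM some_table
```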
Note the min_interval of 6h, so the underlying query is only re-run every 6 hours (results are cached in between).
Not sure, but might the issue be that the Prometheus `scrape_timeout: 15s` cuts off the sql_exporter `scrape_timeout: 1h`, so that any sql_exporter query running for longer than 15s fails? Do I perhaps need to manually set `scrape_timeout: 1h` on the sql-exporter job in Prometheus?
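I.e., something like this on the sql-exporter job (Prometheus requires scrape_timeout <= scrape_interval, which the 6h interval would allow):

```yaml
scrape_configs:
  - job_name: "sql_exporter"        # placeholder
    scrape_interval: 6h             # match the exporter's min_interval
    scrape_timeout: 1h              # give the long query room to finish
    static_configs:
      - targets: ["db-host:9399"]   # placeholder target
```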
Additional information:
One of my sql-exporters would, after a day or so, not provide any metrics at all, only errors on the /metrics endpoint. It would say `Error gathering metrics: [from Gatherer #1]` for each collector, with "context deadline exceeded". Not sure if this is related or not?

I have been running sql-exporter as a Windows service, so I don't have any logs of these issues. Now I've restarted the exporters from the command line with debug logging, to monitor this further.
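Roughly like this (flag spelling from memory, so treat it as a sketch and check `--help` on your build):

```
sql_exporter.exe -config.file sql_exporter.yml -log.level debug
```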