
Treat custom textfile metric timestamps as errors #769

Merged — 1 commit merged into master from custom-timestamp-hard-error on Feb 27, 2018

Conversation

@juliusv (Member) commented Dec 24, 2017

This is clearer behavior and users will notice and fix their textfiles faster
than if we just output a warning.

@discordianfish (Member) left a comment:

arrgg darn review feature. I wrote that comment weeks(?) ago.

    if m.TimestampMs != nil {
        log.Errorf("Textfile %q contains unsupported client-side timestamps, skipping entire file", path)
        error = 1.0
        continue fileLoop
    }
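For context, this is the sort of input the change rejects: in the exposition format, a sample may carry a trailing millisecond timestamp. A hypothetical offending line in a .prom file (metric name and value made up) would look like:

    # TYPE my_batch_job_last_value gauge
    my_batch_job_last_value 42 1519747200000

With this change, a file containing such a line is skipped in its entirety and the collector's error gauge is set to 1, rather than only logging a warning.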
Member:

Not a big fan of continue-to-label. What about moving the checking of parsedFamilies into a function that is called here, so you can continue without a label?
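For illustration, a minimal sketch of the suggested refactor, with hypothetical helper and variable names (dto here stands for the github.com/prometheus/client_model/go package the collector parses into); this is not the code that was actually merged:

    // Hypothetical helper: report an error if any metric in the parsed
    // families carries a client-side timestamp.
    func checkTimestamps(path string, parsedFamilies []*dto.MetricFamily) error {
        for _, mf := range parsedFamilies {
            for _, m := range mf.Metric {
                if m.TimestampMs != nil {
                    return fmt.Errorf("textfile %q contains unsupported client-side timestamps", path)
                }
            }
        }
        return nil
    }

    // At the call site, the labeled continue becomes a plain one:
    for _, path := range paths {
        parsedFamilies := parseFile(path) // assumed parsing step
        if err := checkTimestamps(path, parsedFamilies); err != nil {
            log.Errorf("%v, skipping entire file", err)
            error = 1.0
            continue // no fileLoop label needed
        }
        // ... export parsedFamilies ...
    }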

Member:

+1, it would be nice to avoid this.

Member:

@discordianfish Can we merge this and fix it later?

Member:

Hrm, I don't like this. But if you prefer, it's okay with me; let's open an issue for it, though.

Member:

Yup, I would like to get 0.16 out ASAP, and this is worth getting into 0.16.

@SuperQ (Member) commented Feb 22, 2018

Ping, it would be nice to get this into 0.16.0. 😸

@SuperQ merged commit 864a6ee into master on Feb 27, 2018
@SuperQ deleted the custom-timestamp-hard-error branch on February 27, 2018 at 18:43
@SuperQ mentioned this pull request on Feb 27, 2018
@shollingsworth commented Apr 13, 2018

Hi all, thanks for the work you do!

I was using the timestamp feature (possibly erroneously) and am wondering whether support for timestamps in textfile processing has been dropped completely, or whether there is another place to put them?

Noticed it's still documented here: https://prometheus.io/docs/instrumenting/exposition_formats/

TIA

Stevo

EDIT:
Looks like there is a lot of discussion around this topic; the rabbit trail led me down the following links:

Sounds like this is a non-trivial issue that might be suited to a different approach.

Trying to figure out what that approach is.

@SuperQ (Member) commented Apr 13, 2018

@shollingsworth That's a really great summary of all the issues, thanks for posting it.

If you would like to talk more about your specific use case/approach, I would suggest posting to the prometheus-users list. It would be a better place to discuss things.

@plin2 commented Jun 28, 2018

Well, we need the timestamp for asynchronous delivery of metrics as well as for delayed imports caused by Prometheus downtime. The maintenance downtimes in particular are a problem: if we have important business metrics flowing into Prometheus and end up with gaps every time there is a downtime, that's bad. People will start discussing whether Prometheus/Grafana was the right choice.

How/when will it be possible to import metrics with timestamps again?

@SuperQ (Member) commented Jun 28, 2018

@plin2 that will require a custom exporter.
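For anyone landing here, a minimal sketch of what such a custom exporter could look like in Go, assuming client_golang's prometheus.NewMetricWithTimestamp (available in newer releases of the library); the collector type, metric name, value, and port are all made up:

    package main

    import (
        "log"
        "net/http"
        "time"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    type backfillCollector struct {
        desc *prometheus.Desc
    }

    func (c *backfillCollector) Describe(ch chan<- *prometheus.Desc) {
        ch <- c.desc
    }

    func (c *backfillCollector) Collect(ch chan<- prometheus.Metric) {
        // Hypothetical delayed sample: a value observed ten minutes ago.
        // Note juliusv's caveats below: the TSDB only accepts timestamps
        // within a narrow window around the current head block.
        m := prometheus.MustNewConstMetric(c.desc, prometheus.GaugeValue, 42)
        ch <- prometheus.NewMetricWithTimestamp(time.Now().Add(-10*time.Minute), m)
    }

    func main() {
        reg := prometheus.NewRegistry()
        reg.MustRegister(&backfillCollector{
            desc: prometheus.NewDesc("business_events_value", "Example delayed metric.", nil, nil),
        })
        http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
        log.Fatal(http.ListenAndServe(":9099", nil))
    }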

@juliusv (Member, Author) commented Jun 28, 2018

@plin2 If you care about not ever having gaps in your metrics, Prometheus is indeed not the right choice unfortunately. Even if you could expose client-side timestamps, you can't really use that for "catching up", as you would have to expose the same series multiple times with different timestamps and values, and the behavior around that is not really defined. The source of the metrics would also have to keep track of which Prometheus servers should be scraping it then, but scrapes are supposed to be idempotent.

@plin2 commented Jul 1, 2018

@juliusv thanks, that helped me get further. I wrote a little Perl custom exporter and was able to go back one hour in time; older timestamps were not accepted. But at least that gives me a chance for a small maintenance window. Right now only one Prometheus instance would be scraping the data from that custom exporter, and the script knows when the data was retrieved; even the client IP address is known. With that I can design the program logic to provide either live metrics or the complete backlog of metrics that have not been retrieved yet.

What I haven't found so far is the parameter that restricts the maximum time I can go back. The docs didn't give me any hint. Do you have one for me?

@juliusv (Member, Author) commented Jul 12, 2018

> What I haven't found so far is the parameter that restricts the maximum time I can go back. The docs didn't give me any hint. Do you have one for me?

That's a limitation of the TSDB: it only allows appending to head-block data that hasn't been persisted to immutable blocks yet, so there is only a window of an hour or so that you can safely go back in time. Theoretically you could increase that by setting a longer block time via flags, but that's not really recommended (and unfortunately neither is the entire backfilling use case).
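For reference, the flags this alludes to appear to be the hidden TSDB block-duration flags in Prometheus 2.x; treat the exact names and semantics as an assumption on my part, and note that tuning them is, as said above, not recommended:

    # hypothetical invocation: keep a larger mutable head block (default ~2h)
    prometheus --storage.tsdb.min-block-duration=6h --storage.tsdb.max-block-duration=6h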

@plin2 commented Jul 13, 2018

THX, at least that gives us a one-hour maintenance window.

oblitorum pushed a commit to shatteredsilicon/node_exporter that referenced this pull request on Apr 9, 2024