add complete documentation #1045
Comments
After reading the code, the actual problem seems rather to be that the whole Python module is basically not documented at all. https://prometheus.github.io/client_python/ can at best count as a quick how-to, but even that doesn't seem to cover the simplest and probably most regularly used stuff (like clearing time series whose labels no longer exist — see the sketch below), not to mention general concepts like the registries, or proper documentation of all classes, methods and their params. :-(
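For the record, the calls I eventually dug up for the clearing case look roughly like this (a sketch; metric and label names made up):

```python
from prometheus_client import Gauge

g = Gauge('my_temperature_celsius', 'Current temperature', ['room'])
g.labels('kitchen').set(21.5)
g.labels('cellar').set(12.0)

# Drop the child time series for one specific labelset...
g.remove('kitchen')
# ...or drop all labeled children of the metric at once.
g.clear()
```

None of which is explained in the documentation. |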
Re-dedicating this issue, asking for some more complete documentation ;-) |
Another area that seems completely undocumented is The example given at https://prometheus.github.io/client_python/collector/custom/ doesn't even work, as There's no telling that CustomCollectors are automatically called (and do their thing in No word about what
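For reference, the example on that page is roughly of the following shape (a sketch reconstructed from memory, not the exact page content; the REGISTRY.register() step is the part the page leaves implicit):

```python
from prometheus_client.core import CounterMetricFamily, GaugeMetricFamily, REGISTRY

class CustomCollector:
    def collect(self):
        # collect() is called on every scrape and must yield metric
        # family objects (not the usual Counter/Gauge classes).
        yield GaugeMetricFamily('my_gauge', 'Help text', value=7)
        c = CounterMetricFamily('my_counter_total', 'Help text', labels=['foo'])
        c.add_metric(['bar'], 1.7)
        yield c

# Without this registration the collector is never scraped.
REGISTRY.register(CustomCollector())
```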
If one wants to do both, then it seems one has to combine the two approaches somehow (I had to gather all that information from blogs and 3rd-party websites... all doing things a bit differently... some of it questionable). Having basically no proper documentation is really a pity, as the Python client actually seems to be quite neat, but if people have to read the whole sources or try their luck on Google with questionable outcome, they probably simply won't use all the nice features in a proper way. |
#1021 is another issue focusing on API documentation that would hopefully make things like this clearer. I hesitate to have an issue for just "complete documentation" since that is a moving target as more features are added. |
It's at least difficult IMO to have many small issues for just single points that seem to be missing from the documentation, because if I add another issue for everything I find presumably missing, I'd spam the issue tracker ;-) Like the original write_to_stdout() point of this issue. It would further e.g. be nice to document the various smaller behaviours that one currently only finds out by reading the code.
Another thing which IMO lacks documentation is what several of the classes and their methods actually do and return. And there are many more little bits and pieces like that.
And at least I'm kinda lacking some more concrete best practices on how to design metrics. I know there are a few documents on that, but they're rather abstract IMO. |
What's IMO also confusing: my understanding was that each object of e.g. a metric class corresponds to one metric (family), but the docs never actually spell out what the objects represent.
I would interpret this one way, but the docs leave it open. Also, it's unclear why/when one would use one of the alternatives over the other. |
Yet another ;-) ... part that should IMO be documented is the following: with direct instrumentation, the typical usage seems to be something like this:
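(a sketch; metric and label names made up:)

```python
from prometheus_client import Gauge

# The metric object is created exactly once, e.g. at module level...
g = Gauge('my_temperature_celsius', 'Current temperature', ['room'])

# ...and the instrumented code keeps updating that same object.
g.labels('kitchen').set(21.5)
g.labels('cellar').set(12.0)
```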
With custom collectors, one could seemingly do something like the above too, e.g. as in:
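(again a sketch with made-up names, reconstructing the pattern I tried:)

```python
from prometheus_client.core import GaugeMetricFamily, REGISTRY

# Metric family object created once, as with direct instrumentation...
g = GaugeMetricFamily('my_gauge', 'Help text', labels=['foo'])

class CustomCollector:
    def collect(self):
        # ...and only the samples added on each scrape.
        g.add_metric(['bar'], 1.0)
        yield g

REGISTRY.register(CustomCollector())
```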
where I create the one metric family object up front and then only add samples to it in collect(). But the outcome of that over consecutive scrapes is at best unexpected:
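(hypothetical output for the sketch above, abbreviated to just the sample lines:)

First scrape:
```
my_gauge{foo="bar"} 1.0
```

Second scrape:
```
my_gauge{foo="bar"} 1.0
my_gauge{foo="bar"} 1.0
```

Third scrape:
```
my_gauge{foo="bar"} 1.0
my_gauge{foo="bar"} 1.0
my_gauge{foo="bar"} 1.0
```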
Seems like it just stores all the samples and prints them all... even for the same metric+labelset combinations. Instead it seems that – unlike with the direct instrumentation usage – one has to always re-create the metric family objects inside collect(), like in:
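(same made-up names as above:)

```python
from prometheus_client.core import GaugeMetricFamily, REGISTRY

class CustomCollector:
    def collect(self):
        # A fresh metric family object on every scrape, so no stale
        # samples from previous scrapes are carried over.
        g = GaugeMetricFamily('my_gauge', 'Help text', labels=['foo'])
        g.add_metric(['bar'], 1.0)
        yield g

REGISTRY.register(CustomCollector())
```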
Only then does one get reasonable results. |
Occasionally you find some issues complaining about things and nobody from the project says anything, and you can see the requestor just kinda go off on their own for a while and gradually make less and less sense, clearly on a tear about something only they care about. I think this is the first time I've seen someone do exactly the opposite of that, and @calestyo, thank you so much for fleshing out this issue - there is more legitimate discussion of how the hell anyone might actually use this module in this thread than there is in the official documentation. The docs for this module read like a "clever" person more invested in showing off how clean and clever their solution was than in demonstrating how it might be useful to anyone. The examples are childishly simple, so I suppose it's possible that the framework is actually clean; but they're also so woefully incomplete that it's entirely unclear how anything in the examples results in a recognizable label or metric of any kind. Some of the object methods appear to be created by the example code, but it's really difficult to be sure because nothing is ever explained. Why I would want to create a method by that name is anyone's guess; even the "three-step demo" only registers two values and then describes how they're used incorrectly to show nothing of value. In other words, the docs are a lot of "look what I can do," when the point of good documentation ought to be "look what you can do." The fact that a user of the framework can casually, in fits and starts over the course of a single week, actually put more useful information into the ticketing system for the project than the project has ever published in its own literal documentation ought to be a point of deep shame. In over twenty-five years' experience in the open source arena, this is some of the worst documentation I have ever seen, to the point that it might actually be an improvement to have none at all. |
@greatflyingsteve Please keep in mind that this is an open source project and any efforts (development and documentation) are provided for free. It also typically doesn't help to encourage further contributions if people are blamed with harsh words (to say it diplomatically) like "deep shame". |
@calestyo I understand what you mean, and I appreciate your civility. What I'm trying to get across - besides my obvious frustration, that is - is very specific, though. I write code. It's not very good code, but I write code, or I wouldn't be here. But I also write stories, and forum posts, and tons of other things. The challenge in good writing isn't saying enough, the challenge is not saying too much. From most developers whose docs I've read, your point that "one is already an expert in the system" is spot on: they write at high granularity, and tend to over-describe or dive off into all the cogs and wheels that make their project work. If they're bad at docs (but do write them), the docs have so much detail that they're only half a level removed from reading the code itself. In the same vein, if I can write a thousand words on some topic in an hour, writing six hundred words on the same topic takes two hours. Opening the floodgates is easy; pulling the verbosity back out is hard. It takes editing. These docs are "bad" in a way that takes effort. They have been edited down to the point of being maddeningly terse. They focus on use cases involving poorly-understood (but incredibly cool) language mechanics, but then say nothing at all about how or why that method works, or even what it does. There are code examples, but intentionally bad and useless examples that have no resemblance to a real use case, barely describe the outcome, and say nothing about why or how this is useful. In fact, most of the examples contain no output, such that it's almost impossible to tell what the example might actually do, or how the visible primitives actually translate to anything in the Exposition Format. I'm not saying there's bad faith, but there is a specific flavor of incompleteness that feels like there is, clearly written between the lines, a message that "you might figure this out if you're as clever as I am," in the same way that people who craft advanced logic puzzles intentionally provide only the smallest stubs of a clue. The difference, of course, is that people trying to solve logic puzzles are signing up for the difficulty, and find it desirable. If one is instead making tools, additional difficulty is a bug, not a feature; and this is, very decidedly, a tool. The project I was so frustrated about when I wrote previously is done. The results I wanted turned out to be easier than expected. The library itself is really good, but I struggled harder for having read the docs instead of the code, and I was already familiar with both Prometheus and Python. It would have been easier to use overly-specific docs generated straight out of the docstrings. Unwittingly signing up to solve logic puzzles with intentionally minimized stubs, when one should reasonably expect instead to find clues in plenty, is a frustrating experience. I am almost frustrated enough to go fork and come back with a PR. |
This is an open source project run by volunteers and contributors. It is not helpful to say things like "intentionally bad and useless", as that is certainly not the intention of anyone.
If the docs are frustrating to you, please open a PR to improve them! It shouldn't require you to be "frustrated enough" to improve something, we are always open to improving this library, and docs are a great place to start. |
Hey.

It would be nice if there was something like a write_to_stdout() function, or if write_to_textfile() would somehow allow writing to stdout. For debugging that would seem to be much more helpful. Also, distros may have their own unified means of collecting the .prom files, and thus even in production it may make sense to let a program simply write the metrics to stdout.

Cheers,
Chris.

PS: Using /dev/stdout doesn't work, as it tries to write its tmpfile there.
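A workaround that seems to do the trick in the meantime (a sketch using the existing generate_latest() helper, which renders a registry in the text exposition format as bytes; not an official write_to_stdout()):

```python
import sys

from prometheus_client import CollectorRegistry, Counter, generate_latest

registry = CollectorRegistry()
c = Counter('my_requests', 'Example counter', registry=registry)
c.inc()

# Render the registry in the text exposition format and write it to
# stdout, instead of going through write_to_textfile()'s tmpfile dance.
sys.stdout.write(generate_latest(registry).decode('utf-8'))
```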