Service Discovery for multiple ports/paths on same FQDN #42
Comments
What you need here is the actual FQDN, not just localhost.
I'm assuming you mean run the client in a separate pod in Kubernetes, and then hook them up via a service. That can mean significantly more configuration, especially if using RBAC, but it's doable. But it still doesn't give discovery for metrics paths or labels, which would be significantly nicer than having to write custom relabeling config logic.
If you're in Kubernetes you can use k8s service discovery, and there should be no need for pushprox as Prometheus will be on the same network.
We actually have many Kubernetes clusters we need to monitor, and for security reasons we can't reach into them, only out of them, thus PushProx.
The recommended way to monitor k8s is from inside k8s, which will make all of this much simpler.
I'm aware. The fact is we can't reach into these clusters to get metrics out to view/alert on them, thus PushProx. The purpose of PushProx is to work around a network barrier just like the one we have to work around, so there's no argument here. This issue is around the idea of better service discovery, requiring less custom code to hit multiple endpoints.
I'm saying to run Prometheus inside k8s.
How would I use Prometheus or hook it up to Grafana or other tools, since I can't reach into the cluster? For security reasons, I cannot expose any ingress into the clusters.
That's a bit of a self-inflicted problem, but you could use tunnels such as pushprox or ssh.
These are clusters that we stand up for clients and then remove our own access to, but still need to monitor. It is part of what we do. I don't know why you're arguing with me about it; it seems like this is the kind of situation that pushprox is meant for. Thank you for suggesting pushprox, that's why I'm here in the first place.
Ultimately pushprox is meant to bypass network restrictions, not be an arbitrary SD system.
I hear that, but it supports very limited service discovery for a narrow range of use cases. Is there no plan to allow it to broaden?
Not currently, everything is machine-based.
Even with separate FQDNs like ..., it would be so much easier to just specify the port on the client side.
With PushProx's current setup, I have to build automation around k8s. If instead the client could send its FQDN/port/label list, no automation would be needed, other than writing out the results of GETing the proxy's list of clients.
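For context, the kind of glue automation being described might look roughly like the sketch below: poll the proxy for its registered clients and fan them out into a file_sd target file using ports known only out-of-band. The proxy address, the hard-coded port list, and the assumption that `/clients` returns bare FQDNs in target-group form are illustrative assumptions, not documented PushProx behaviour.

```go
// Sketch of the manual glue described above: fetch the proxy's registered
// clients and write them out as a Prometheus file_sd target file, fanning
// each FQDN out over ports we have to maintain by hand today.
// The /clients response shape and the proxy URL are assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// targetGroup mirrors the Prometheus file_sd target-group format.
type targetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels,omitempty"`
}

func main() {
	// Assumed proxy address; replace with your PushProx proxy.
	resp, err := http.Get("http://pushprox-proxy:8080/clients")
	if err != nil {
		fmt.Fprintln(os.Stderr, "fetching clients:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	var clients []targetGroup
	if err := json.NewDecoder(resp.Body).Decode(&clients); err != nil {
		fmt.Fprintln(os.Stderr, "decoding clients:", err)
		os.Exit(1)
	}

	// Ports we have to know about out-of-band; the issue asks for the
	// client to report these itself instead.
	ports := []string{"9100", "9104"}

	var out []targetGroup
	for _, c := range clients {
		for _, fqdn := range c.Targets {
			for _, p := range ports {
				out = append(out, targetGroup{Targets: []string{fqdn + ":" + p}})
			}
		}
	}

	f, err := os.Create("pushprox_targets.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, "creating target file:", err)
		os.Exit(1)
	}
	defer f.Close()
	if err := json.NewEncoder(f).Encode(out); err != nil {
		fmt.Fprintln(os.Stderr, "writing target file:", err)
		os.Exit(1)
	}
}
```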
I'm afraid this is getting a bit out of scope. Also ports will usually be the same in a standard setup.
There are many setups besides Kubernetes where you might have multiple apps running on one machine, and thus need to expose them on differing ports or paths. A common example might be running both node-exporter and mysql-exporter on one machine behind a NAT. If you don't want to support this, what kind of setup is PushProx meant for?
The SD is something simple for simple setups; if you have more complex needs, you need something fancier.
Same as @snarlysodboxer: we monitor multiple k8s clusters from outside the cluster. Now I want to monitor https://github.com/coredns/coredns/tree/master/plugin/metrics, but I want to set up a single PushProx to handle it all.
Correct me if I'm wrong, but there is no Prometheus tool to aggregate multiple metrics endpoints into one. So if a use case has multiple endpoints on one machine, the user is going to have to implement this aggregation manually. It seems high value (a small change for a lot of utility) to let the pushprox client support multiple endpoints directly.
Yes, that's something to be dealt with on the Prometheus side as always.
@belm0 To be more clear, Prometheus is the tool to aggregate multiple metrics endpoints. The reason it doesn't make sense to have an intermediate tool do any of that work is that a metrics endpoint may expose a metric with a name identical to one on another endpoint, and only the target labels Prometheus attaches keep them apart.
Thank you for explaining. Still adjusting from a pushgateway world.
This isn't something I'm planning on adding; if your system is this complicated then you need a proper SD solution beyond the scope of this binary.
This is something we would also use if it existed. We want to allow users to run PushProx outside of our network boundaries and control which exporters they run, without needing to coordinate with us when some hosts run some exporters and others don't. We want to keep the footprint of what is required on their side minimal, and so PushProx is attractive to us. We'd be interested in working on this if a PR might be accepted. I haven't done extensive research yet, but I was thinking this might look something like taking multiple instances of a client-side flag.
Some use cases may need to scrape multiple endpoints on the same FQDN, such as `localhost`. For example, in a Kubernetes pod you might have multiple metrics endpoints to scrape: `localhost:9100/metrics`, `localhost:9101/my-metrics`, etc. You can't just run another copy of the client and still get service discovery, since the client only forwards the FQDN.

What do you think about allowing the client to specify labels, ports, and metrics paths for discovery? One way to do this might be changing `--fqdn` to `--endpoint` and maybe allowing it to be specified multiple times. An example `--endpoint` value might be `http://localhost:9100/metrics?app=myapp`, where `app=myapp` would be turned into labels for discovery. I can work on a PR if you think this or similar is a good idea.

Additionally, the same values could optionally be used for security, as talked about in PR #41.
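To make the proposal concrete, here is a minimal sketch (in Go, the client's own language) of how a repeated `--endpoint` flag could be parsed, turning each URL's query parameters into discovery labels. The flag plumbing and type names are illustrative assumptions, not existing PushProx code:

```go
// Minimal sketch of the proposed repeated --endpoint flag: each value is a
// URL whose host:port and path become the scrape target, and whose query
// parameters become discovery labels. Names here are illustrative only.
package main

import (
	"flag"
	"fmt"
	"net/url"
)

// stringSlice lets --endpoint be passed multiple times.
type stringSlice []string

func (s *stringSlice) String() string     { return fmt.Sprint(*s) }
func (s *stringSlice) Set(v string) error { *s = append(*s, v); return nil }

// endpoint is one scrape target plus labels derived from its query string.
type endpoint struct {
	Target      string            // host:port, e.g. localhost:9100
	MetricsPath string            // e.g. /metrics
	Labels      map[string]string // e.g. app=myapp
}

func parseEndpoint(raw string) (endpoint, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return endpoint{}, err
	}
	labels := map[string]string{}
	for k, vs := range u.Query() {
		if len(vs) > 0 {
			labels[k] = vs[0]
		}
	}
	return endpoint{Target: u.Host, MetricsPath: u.Path, Labels: labels}, nil
}

func main() {
	var endpoints stringSlice
	flag.Var(&endpoints, "endpoint", "scrape endpoint URL, may be repeated")
	flag.Parse()

	for _, raw := range endpoints {
		ep, err := parseEndpoint(raw)
		if err != nil {
			fmt.Println("bad --endpoint value:", err)
			continue
		}
		fmt.Printf("target=%s path=%s labels=%v\n", ep.Target, ep.MetricsPath, ep.Labels)
	}
}
```

Running the sketch with `--endpoint 'http://localhost:9100/metrics?app=myapp' --endpoint 'http://localhost:9101/my-metrics?app=other'` would yield two targets, each with its own port, path, and label set, which the client could then report to the proxy for discovery.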