net_host_name of etcd metrics has different data formats in ipv4 and ipv6 #15400
Comments
I might be mistaken, but isn't net_host_name populated by Prometheus/OpenTelemetry?
Agree, I think this is upstream of etcd. Here is the OpenTelemetry conventions reference: https://opentelemetry.io/docs/reference/specification/trace/semantic_conventions/span-general/#nethostname

Which version of etcd did you encounter this in? I ask because it looks like this issue may have been fixed upstream, but we may have been using an older version of otel-go for a while. For reference: https://pkg.go.dev/go.opentelemetry.io/otel?tab=versions

Edit: @ahrtr, if I'm on the right track, should we backport a version bump for the otel dependency? I'm happy to raise it if so.
For this case, the labels/attributes (e.g. We keep bumping dependencies for main (3.6), but not for
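To make the suspected failure mode concrete, below is a minimal Go sketch (illustrative only, not the actual otel-go code) contrasting a naive colon-based split of a listen address with the standard library's net.SplitHostPort, which understands bracketed IPv6 addresses. The addresses are placeholders shaped like the ones reported in this issue.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// naiveHost mimics a simplistic "host:port" split that only understands a
// single colon: it works for IPv4, but for an IPv6 address it gives up and
// returns the whole string, port included -- the same shape as the
// net_host_name values reported here for IPv6.
func naiveHost(addr string) string {
	if parts := strings.Split(addr, ":"); len(parts) == 2 {
		return parts[0]
	}
	return addr
}

func main() {
	for _, addr := range []string{
		"10.99.0.1:2379",            // IPv4 listen address (placeholder)
		"[fd00:177:177::a68a]:2379", // IPv6 listen address (placeholder)
	} {
		host, port, err := net.SplitHostPort(addr)
		fmt.Printf("addr=%-28s naive=%-30q host=%q port=%q err=%v\n",
			addr, naiveHost(addr), host, port, err)
	}
}
```

If the attribute were derived with something like the naive split above, an IPv6 host would keep the port in net_host_name while an IPv4 host would not, which matches the behaviour reported in this issue.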
Hey @B-Betty - I'm working on recreating this now. Can you please provide snippets of your Prometheus & OpenTelemetry Collector configs to confirm how you are consuming the etcd metrics? Thanks!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.
Closing - This was fixed upstream in OpenTelemetry-Go 1.1.0: https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md#110---2021-10-27
Last month we updated
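For anyone still seeing this on an older etcd build, the remedy discussed above is effectively a dependency bump in the consuming module. As a rough, hypothetical sketch (the exact module set and minimum version pinned by a given etcd release may differ), the go.mod of the consumer would need at least:

```go
// go.mod excerpt (illustrative version only; the real etcd go.mod pins
// several go.opentelemetry.io modules and may require something newer)
require (
	go.opentelemetry.io/otel v1.1.0 // release referenced in the changelog link above
)
```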
What happened?
The value of net_host_name in etcd metrics is just the IP address in an IPv4 environment, but in an IPv6 environment it is the IP with the port appended (ip:2379).
Is this a bug? How can it be solved?
IPv4:
etcd_server_is_leader{http_scheme="http", instance="177.177.77.22:8889", job="otel-collector", kubernetes_node_name="node154", net_host_name="10.99.xx.xx", net_host_port="2379", service_instance_id="10.99.xx.xx:2379", service_name="etcd"}
IPv6:
etcd_server_is_leader{http_scheme="http", instance="[fd00:177:177::a68a]:8889", job="otel-collector", kubernetes_node_name="node1", net_host_name="202:202::xx:xx:2379", service_instance_id="202:202::xx:xx:2379", service_name="etcd"}
What did you expect to happen?
Is this a bug? How can it be solved?
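Presumably the expectation is that net_host_name carries only the address in both families, with the port in net_host_port, exactly as in the IPv4 sample above. For reference, this is how Go's standard library conventionally joins and splits host/port pairs (placeholder addresses; an illustration of the expected shapes, not etcd code):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Joining: an IPv6 host gets bracketed, so the port stays unambiguous.
	fmt.Println(net.JoinHostPort("10.99.0.1", "2379"))          // 10.99.0.1:2379
	fmt.Println(net.JoinHostPort("fd00:177:177::a68a", "2379")) // [fd00:177:177::a68a]:2379

	// Splitting the bracketed form recovers the bare address (net_host_name)
	// and the port (net_host_port) in both cases.
	host, port, _ := net.SplitHostPort("[fd00:177:177::a68a]:2379")
	fmt.Println(host, port) // fd00:177:177::a68a 2379
}
```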
How can we reproduce it (as minimally and precisely as possible)?
curl http://ip:2379/metrics
Anything else we need to know?
No response
Etcd version (please run commands below)