kubeletstats receiver: "Get \"https://NODE_NAME:10250/stats/summary\": dial tcp: lookup NODE_NAME on 10.245.0.10:53: no such host" #22843
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments if you do not have permissions to add labels yourself, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers.
Apologies for the delay here. Is this the full and correct config file? I'm confused as to why you're getting this error.
Thank you for your response! The Helm Chart enables the `kubeletstats` receiver when the `kubeletMetrics` preset is enabled:

```
{{- if .Values.presets.kubeletMetrics.enabled }}
{{- $config = (include "opentelemetry-collector.applyKubeletMetricsConfig" (dict "Values" $data "config" $config) | fromYaml) }}
{{- end }}
[...]
{{- tpl (toYaml $config) . }}
```

(from https://github.com/open-telemetry/opentelemetry-helm-charts/blob/3471a2afe2d5e01d23b4bc02f62ef70077c7dcc7/charts/opentelemetry-collector/templates/_config.tpl#L43-L45 and https://github.com/open-telemetry/opentelemetry-helm-charts/blob/3471a2afe2d5e01d23b4bc02f62ef70077c7dcc7/charts/opentelemetry-collector/templates/_config.tpl#L52; the version of the Helm Chart I tried might have been different)

```
{{- define "opentelemetry-collector.applyKubeletMetricsConfig" -}}
{{- $config := mustMergeOverwrite (include "opentelemetry-collector.kubeletMetricsConfig" .Values | fromYaml) .config }}
{{- $_ := set $config.service.pipelines.metrics "receivers" (append $config.service.pipelines.metrics.receivers "kubeletstats" | uniq) }}
{{- $config | toYaml }}
{{- end }}

{{- define "opentelemetry-collector.kubeletMetricsConfig" -}}
receivers:
  kubeletstats:
    collection_interval: 20s
    auth_type: "serviceAccount"
    endpoint: "${env:K8S_NODE_NAME}:10250"
{{- end }}
```

(from https://github.com/open-telemetry/opentelemetry-helm-charts/blob/3471a2afe2d5e01d23b4bc02f62ef70077c7dcc7/charts/opentelemetry-collector/templates/_config.tpl#L151-L163; the version of the Helm Chart I tried might have been different)
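To make the preset's effect concrete, those templates should render to roughly the collector configuration below. This is a sketch, not verbatim chart output; the other receivers shown in the metrics pipeline are placeholders and depend on the chart's defaults and values.

```yaml
# Approximate result of the kubeletMetrics preset (sketch only)
receivers:
  kubeletstats:
    collection_interval: 20s
    auth_type: "serviceAccount"
    # K8S_NODE_NAME is injected by the chart from the Downward API (spec.nodeName),
    # so the endpoint is the node's *name*, which must be resolvable via DNS
    endpoint: "${env:K8S_NODE_NAME}:10250"

service:
  pipelines:
    metrics:
      # "kubeletstats" is appended to whatever receivers the pipeline already had
      receivers: [otlp, prometheus, kubeletstats]
```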
Same problem here on DigitalOcean K8s. The IP from the error is the kube-dns svc.

Error logs:
Usually, cloud provider managed compute services have a private DNS setup for VMs, and the node's local domain is appended to the hostname, so the node name resolves from inside the cluster. That does not seem to be the case here, so you can instead set the node IP in an environment variable and use it as the kubeletstats receiver endpoint.
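A minimal sketch of that approach, assuming the chart passes `extraEnvs` through to the collector pod and using the Downward API `status.hostIP` field:

```yaml
# Sketch: expose the node's IP to the collector pod via the Downward API
extraEnvs:
  - name: K8S_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
```

```yaml
# ...and point the kubeletstats receiver at the IP instead of the node name
receivers:
  kubeletstats:
    endpoint: "${env:K8S_NODE_IP}:10250"
```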
I would suggest opening an issue in the chart repo to add using the node IP as an option, and closing this one. This isn't something to fix in the receiver; instead, it can be resolved with changes to the config passed to it.
@devurandom Let us know if any other information would be helpful! Otherwise, feel free to close the issue and open another one in the chart repo as @jinja2 suggested.
I'm going to close this issue for now, but please feel free to let us know if you have any other questions.
Not all k8s deployments resolve the node host name to an IP. This is generally true inside of cloud providers, but not necessarily the case when self-hosting. Fixes open-telemetry/opentelemetry-collector-contrib#22843
For those interested, I opened up a PR in the charts repo here:
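Conceptually, an opt-in node-IP setting in the chart could look something like the sketch below; `useNodeIP` is a made-up name for illustration, not necessarily what the PR actually adds.

```yaml
# Hypothetical values.yaml shape; "useNodeIP" is an invented flag, not a real chart option
presets:
  kubeletMetrics:
    enabled: true
    useNodeIP: true   # would render the endpoint from the node IP (status.hostIP) instead of the node name
```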
The above configuration was effective for me in addressing issues I was experiencing on DigitalOcean. Since I was too busy to wait for a new chart version to be deployed, I added the following environment variable configuration to the opentelemetry-collector-daemonset Helm chart and used it with the existing chart version:

```yaml
extraEnvs:
  - name: K8S_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
```

and then in the kubeletstats receiver:

```yaml
receivers:
  kubeletstats:
    endpoint: "${env:K8S_NODE_IP}:10250"
```
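Putting both pieces together, a values.yaml along these lines should work. This is a sketch that assumes the chart merges a user-supplied `config:` section over the preset defaults (as the `mustMergeOverwrite` call quoted earlier suggests), so exact keys may vary by chart version.

```yaml
# Sketch of a combined values.yaml for the collector chart; not verified against any specific version
mode: daemonset                      # one collector per node

presets:
  kubeletMetrics:
    enabled: true                    # preset adds the kubeletstats receiver

extraEnvs:
  - name: K8S_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP     # Downward API: the node's IP

config:
  receivers:
    kubeletstats:
      # override the preset's ${env:K8S_NODE_NAME} endpoint so no DNS lookup is needed
      endpoint: "${env:K8S_NODE_IP}:10250"
```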
Component(s)
receiver/kubeletstats
What happened?
Description
I set up the OpenTelemetry Collector as an agent on each node using the Helm Chart and the `values.yaml` template below. The `kubeletstats` receiver tries to resolve the Kubernetes node name that the Helm Chart injects as its endpoint via the cluster's DNS, but fails (see logs below).

This seems similar to how the Kubernetes metrics-server tried to do the same and then was changed to resolve the node via the Kubernetes API and the `InternalIP` field of the node status.
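For reference, the addresses metrics-server resolves through the API live under the node's status. An illustrative (made-up) example of that part of `kubectl get node <name> -o yaml`:

```yaml
# Illustrative only; the addresses are invented, but the NodeStatus field layout is standard
status:
  addresses:
    - type: Hostname
      address: pool-example-3abcd    # the kind of name that cluster DNS fails to resolve here
    - type: InternalIP
      address: 10.110.0.2            # what metrics-server resolves via the Kubernetes API
```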
Steps to Reproduce

See the `values.yaml` file below. I deployed it to a DigitalOcean Kubernetes cluster.

Expected Result

The `kubeletstats` receiver should "just work".

Actual Result
See the error message and stack trace below.
Collector version
0.77.0
Environment information
Environment
OS: DigitalOcean Kubernetes
OpenTelemetry Collector configuration
Log output
Additional context
Is there a workaround, e.g. relying on `metrics-server` or `kube-state-metrics`? How would I configure that?