HyperError: error trying to connect: invalid dnsname #236

Closed
luong-komorebi opened this issue Oct 15, 2021 · 6 comments
Assignees
Labels
bug Something isn't working

Comments

@luong-komorebi

I migrated my deployment from the version 2 Docker image to version 3. The deployment is done with Helm, and this is the error I found. It stops the container from starting up, and the Kubernetes pod status is either Error or CrashLoopBackOff.

We tried our best to determine whether it was unique to our setup or indicative of a problem on our side. Unfortunately, we haven't been able to figure it out, and we thought you might be able to help.

Information

  • Agent helm chart version: 203.3.1
  • LogDNA version: 3.3.1

Log:

[2021-10-15T18:43:23Z INFO  logdna_agent] running version: 3.3.1
[2021-10-15T18:43:23Z ERROR config::raw] error encountered loading configuration: Io(Os { code: 2, kind: NotFound, message: "No such file or directory" })
[2021-10-15T18:43:23Z ERROR config::raw] error encountered loading configuration: Io(Os { code: 2, kind: NotFound, message: "No such file or directory" })
[2021-10-15T18:43:23Z INFO  config] using settings defined in env variables and command line options
[2021-10-15T18:43:23Z INFO  config] starting with the following options:
    ---
    http:
      host: logs.logdna.com
      endpoint: /logs/agent
      use_ssl: true
      timeout: 10000
      use_compression: true
      gzip_level: 2
      ingestion_key: REDACTED
      params:
        hostname: "ip-10-0-24-193\n"
        now: 0
      body_size: 2097152
    log:
      dirs:
        - /var/log/
      include:
        glob:
          - "*.log"
        regex: []
      exclude:
        glob:
          - /var/log/wtmp
          - /var/log/btmp
          - /var/log/utmp
          - /var/log/wtmpx
          - /var/log/btmpx
          - /var/log/utmpx
          - /var/log/asl/**
          - /var/log/sa/**
          - /var/log/sar*
          - /var/log/tallylog
          - /var/log/fluentd-buffers/**/*
          - /var/log/pods/**/*
        regex: []
      use_k8s_enrichment: ~
      log_k8s_events: ~
    journald: {}

[2021-10-15T18:43:23Z INFO  state] Opening state db at "/var/lib/logdna/agent_state.db"
[2021-10-15T18:43:45Z ERROR logdna_agent] The agent could not access k8s api after several attempts: failed to initialize kubernetes middleware unable to poll pods during initialization: HyperError: error trying to connect: invalid dnsname
thread 'main' panicked at 'The agent could not access k8s api after several attempts: failed to initialize kubernetes middleware unable to poll pods during initialization: HyperError: error trying to connect: invalid dnsname', bin/src/main.rs:124:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
@jakedipity jakedipity self-assigned this Oct 15, 2021
@jakedipity jakedipity added the bug Something isn't working label Oct 15, 2021
@jakedipity
Contributor

Hi @luong-komorebi,

This issue is caused by the current version of kube-rs trying to connect through an IP address instead of the recommended DNS name: kube-rs/kube#587
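Roughly speaking, the failure comes from the TLS stack's DNS-name parser rejecting IP literals such as the value of KUBERNETES_SERVICE_HOST. A minimal sketch, assuming the webpki 0.21 API that hyper-rustls builds on (this is illustrative, not the agent's actual code):

    // Sketch only: webpki accepts hostnames but rejects IP literals,
    // which surfaces through hyper as "invalid dnsname".
    use webpki::DNSNameRef;

    fn main() {
        assert!(DNSNameRef::try_from_ascii_str("kubernetes.default.svc").is_ok());
        assert!(DNSNameRef::try_from_ascii_str("10.0.24.1").is_err());
    }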

We've already pushed out a fix for this and it's currently in master: 1accad4

Let me get back to you about when we expect to release this fix.

@luong-komorebi
Author

Thanks for the quick response, @jakedipity
As we need a hotfix to bring our logdna agent back as well, do you happen to know which 3.x Docker image version doesn't have this bug? Or do we have to go back to version 2?

@jakedipity
Contributor

I believe all the 3.x versions have this issue. If version 2 works for you, then I would recommend downgrading for the time being. We should be able to cut a release with this fix sometime next week.

Additionally, you're free to create your own image from the master branch: https://github.com/logdna/logdna-agent-v2#building-docker-image

@luong-komorebi
Author

Appreciate your help, @jakedipity. Looking forward to resolving this issue soon.

Using a custom-built image is fine, but it does pose some trouble. For example, when I dissect the code of the logdna-agent Helm chart, it doesn't expose the ability to customize appVersion. appVersion is always set by the people who maintain the chart, and right now the number is pinned at 3.3.1.

Therefore, if I use my own Docker repo (let's say mydocker/mylogdna with a tag patched-version), it won't work. If I re-tag patched-version as 3.3.1, it works for now. But if the upstream Helm repo bumps the version, it breaks, since the version would then be 3.3.2 and I don't have that tag in my repo yet. And as #233 points out, right now it is very tricky to pin a version of the chart, since that exact number is not widely released to the public.
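To make that concrete, this is roughly the shape of the override (the image.repository key is an assumption about the chart's values; the tag itself is pinned by the chart's appVersion, so the patched image has to be pushed under that exact tag):

    # Hypothetical values.yaml override; only the repository can change,
    # the tag is derived from the chart's appVersion (3.3.1 at the moment),
    # so mydocker/mylogdna:3.3.1 must exist in the registry.
    image:
      repository: mydocker/mylogdna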

In summary, I wish we had a more long-term solution. Thanks for all the hard work; we will try to solve the problems on our side first, so I'm closing this issue.

@c-nixon
Collaborator

c-nixon commented Oct 19, 2021

@luong-komorebi We have released 3.3.2; it backports the dependency update mentioned above, which should hopefully resolve the errors you ran into.

@luong-komorebi
Author

Thank you so much for the information, @c-nixon

Thanks everyone at Logdna for the support
