Allow Logging format to be changed #965

Closed
gallowaystorm opened this issue Sep 27, 2022 · 24 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@gallowaystorm
Contributor

Is your feature request related to a problem? Please describe.

Yes, it is related to a problem. When sending the logs to Datadog, all entries show up as errors because I cannot switch them to a format that Datadog accepts. This means every log is reported as an error even if it is only an info-level message.

Describe the solution you'd like

The ability to change the format of the logs.

Describe alternatives you've considered

none

What version of descheduler are you using?

descheduler version: Helm 0.24.1

Additional context

@gallowaystorm added the kind/feature label Sep 27, 2022
@damemi
Contributor

damemi commented Sep 27, 2022

Hi @gallowaystorm, is the --logging-format flag possibly helpful? It supports text and JSON logging output.
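For reference, a direct invocation would look something like this (a minimal sketch; the flag name is the one mentioned above, the rest of the command is assumed):

# Sketch only: run descheduler with JSON-structured log output
$ descheduler --logging-format=json --v=3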

@gallowaystorm
Contributor Author

How can I use that with the Helm chart?

@harshanarayana
Contributor

harshanarayana commented Sep 28, 2022

helm install ...... --set cmdOptions.logging-format=json

@gallowaystorm This should get you going

❯ helm install test . --dry-run --set cmdOptions.logging-format=json | grep -A2 logging
                - "--logging-format"
                - "json"
                - "--v"

@gallowaystorm
Contributor Author

Thank you, I will try this!

@gallowaystorm
Contributor Author

@harshanarayana, what is the default setting for this?

@gallowaystorm
Contributor Author

Also, that did not help the issue in Datadog. It did change the log format, however.

@gallowaystorm gallowaystorm reopened this Sep 29, 2022
@damemi
Contributor

damemi commented Sep 29, 2022

@gallowaystorm could you give more details on what the issue in Datadog is? Some sample output or screenshots would help; I don't know whether Datadog is looking for a specific format, or what format you need.

@gallowaystorm
Contributor Author

gallowaystorm commented Sep 29, 2022

Sure, here is a picture of our Datadog (DD) view with the "text" format:

[screenshot: descheduler logs in Datadog, text format]

Here it is when I switched to JSON format:

[screenshot: descheduler logs in Datadog, JSON format]

DD usually expects JSON; however, it seems the log level isn't being sent in the JSON format. Everything is coming in as an error in DD because of this.

@damemi
Contributor

damemi commented Sep 29, 2022

We just use the klog library to emit our logs, which should also handle the JSON formatting, so I'm not sure why it wouldn't be passing the severity.

Could you try setting --skip-headers as well? I found this issue, and it looks like the klog headers may be in your logs... that still doesn't explain the missing severity in the JSON format, but it could be a start.
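For completeness, on the command line the two flags would be combined like this (a sketch; both flag names appear in this thread, the rest is assumed):

$ descheduler --logging-format=json --skip-headers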

@harshanarayana
Contributor

harshanarayana commented Sep 29, 2022

@damemi Might this be related to the discussion on #811? At least the stderr part seems related.

@damemi
Contributor

damemi commented Oct 3, 2022

@harshanarayana ah, you may be right. I bumped that issue and will try to figure out what still needs to be done for it.

@gallowaystorm
Contributor Author

gallowaystorm commented Oct 4, 2022

@damemi I tried to set the skip-headers option via our Helm release in Terraform (how we do Helm releases), and it does not seem possible to do so:

resource "helm_release" "descheduler" {
  chart      = local.service_name
  name       = "${local.service_name}-${var.environment}"
  namespace  = local.namespace
  repository = "https://kubernetes-sigs.github.io/descheduler/"
  version    = "0.24.1" # APP VERSION v0.24.1

  set {
    name  = "cmdOptions.skip-headers"
    value = true
  }
}

Log from DD:

Error: unknown command "true" for "descheduler"

https://github.com/kubernetes-sigs/descheduler/blob/0317be1b7672c511f5e2e53acaea1dd5cb74fd77/docs/user-guide.md#cli-options

It seems like that flag does not accept a Boolean value.

@damemi
Contributor

damemi commented Oct 7, 2022

@gallowaystorm I think the problem is actually closer to #811, as @harshanarayana mentioned. But it's worth unsticking this just to see.

Running locally, the following commands work:

$ descheduler --skip-headers=true
$ descheduler --skip-headers

but this one gives the unknown command error:

$ descheduler --skip-headers true

Since we don't add an = when we parse cmdOptions, could you try just setting an empty value "" for this? i.e.

  set {
    name  = "cmdOptions.skip-headers"
    value = ""
  }
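To check what the chart actually renders, a dry run similar to the earlier one should work (a sketch; the exact output shape is an assumption):

$ helm template test . --set cmdOptions.skip-headers="" | grep -B1 -A1 skip-headers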

@gallowaystorm
Contributor Author

That did not make the logging error any better; however, it did make the logs easier to read... I then tried the logtostderr approach with:

resource "helm_release" "descheduler" {
  chart      = local.service_name
  name       = "${local.service_name}-${var.environment}"
  namespace  = local.namespace
  repository = "https://kubernetes-sigs.github.io/descheduler/"
  version    = "0.24.1" # APP VERSION v0.24.1

  set {
    name  = "cmdOptions.logtostderr"
    value = "false"
  }
}

This still did not seem to do the trick...

@gallowaystorm
Contributor Author

Any updates on this?

@damemi
Contributor

damemi commented Oct 17, 2022

Hi @gallowaystorm, thanks for bumping this. The issue lies in our usage of logging libraries from upstream Kubernetes and their default output streams.

I dug around some more and found an issue that also has to do with Datadog integration: #676

It sounds like #676 (comment) describes a Datadog workaround that redirects logs to the right log level based on parsing the lines. This might unstick you until we fix our own implementation.
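For anyone following along: klog's text-format lines start with a severity letter (I/W/E/F), which is what such a pipeline can key off to remap the status. A rough illustration of extracting that letter (the sample line and pattern are illustrative, not the exact #676 configuration):

# Illustration only: pull the leading klog severity letter from a text-format line
$ echo 'I1017 12:34:56.789012       1 descheduler.go:123] "Evicted pod"' | sed -E 's/^([IWEF])[0-9]{4}.*$/\1/'
I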

@damemi
Contributor

damemi commented Oct 17, 2022

@gallowaystorm have you tried a combination of logtostderr=false and logging-format=json?
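Via the chart, that combination would be something like (a sketch reusing the cmdOptions mechanism from earlier in this thread; the release name and chart reference are placeholders):

$ helm upgrade --install descheduler descheduler/descheduler \
    --set cmdOptions.logging-format=json \
    --set cmdOptions.logtostderr=false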

@gallowaystorm
Contributor Author

Yes, I did try that, @damemi. Let me look at #676 and see what's happening there.

@gallowaystorm
Contributor Author

The workaround in #676 did the trick for our needs. However, because the workaround depends on the headers being present in the logs, we can no longer remove the headers. Will this be fixed in a future release?

@damemi
Contributor

damemi commented Oct 19, 2022

@gallowaystorm I'm glad that worked for you. Yes, there appears to be some issue in our logging; others have also reported problems with logs, so we will be discussing this at our next project meeting.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 17, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Feb 16, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned Mar 18, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
