
KEDA Metrics API Server not honoring Loglevels #2316

Closed
alexkunde opened this issue Nov 23, 2021 · 9 comments · Fixed by #3063
Labels
bug Something isn't working

Comments

@alexkunde

Report

What is happening?

The KEDA Metrics API Server is spamming trace logs, even though logging is set to v=0

Config

Containers:
keda-metrics-apiserver:
  Image:         <redacted>/kedacore/keda-metrics-apiserver:2.4.0
  Ports:         6443/TCP, 8080/TCP
  Host Ports:    0/TCP, 0/TCP
  Args: 
    /usr/local/bin/keda-adapter
    --secure-port=6443
    --logtostderr=true
    --v=0

Logs

I1123 11:36:52.772270       1 trace.go:205] Trace[1110288377]: "List" url:/apis/external.metrics.k8s.io/v1beta1/namespaces/dd071/http-http---ems-dd071-8080-topic-dd071-hsdg->-pendingmessages,user-agent:kube-controller-manager/v1.20.9 (linux/amd64) kubernetes/454b5b5/system:serviceaccount:kube-system:horizontal-pod-autoscaler,client:172.31.31.244 (23-Nov-2021 11:36:52.195) (total time: 576ms):
Trace[1110288377]: ---"Listing from storage done" 576ms (11:36:00.772)
Trace[1110288377]: [576.952177ms] [576.952177ms] END
I1123 11:37:05.443555       1 trace.go:205] Trace[1865666574]: "List" url:/apis/external.metrics.k8s.io/v1beta1/namespaces/dd075/http-http---ems-dd075-8080-topic-dd075-hsdg->-pendingmessages,user-agent:kube-controller-manager/v1.20.9 (linux/amd64) kubernetes/454b5b5/system:serviceaccount:kube-system:horizontal-pod-autoscaler,client:172.31.31.244 (23-Nov-2021 11:37:04.722) (total time: 720ms):
Trace[1865666574]: ---"Listing from storage done" 720ms (11:37:00.443)
Trace[1865666574]: [720.996084ms] [720.996084ms] END
I1123 11:37:54.733078       1 trace.go:205] Trace[751634469]: "List" url:/apis/external.metrics.k8s.io/v1beta1/namespaces/df062/http-http---ems-df062-8080-topic-df062-hsdg->-pendingmessages,user-agent:kube-controller-manager/v1.20.9 (linux/amd64) kubernetes/454b5b5/system:serviceaccount:kube-system:horizontal-pod-autoscaler,client:172.31.31.244 (23-Nov-2021 11:37:54.068) (total time: 664ms):
Trace[751634469]: ---"Listing from storage done" 664ms (11:37:00.732)
Trace[751634469]: [664.69468ms] [664.69468ms] END
I1123 11:38:44.188531       1 trace.go:205] Trace[1425914106]: "List" url:/apis/external.metrics.k8s.io/v1beta1/namespaces/df063/http-http---ems-df063-8080-topic-df063-hsdg->-pendingmessages,user-agent:kube-controller-manager/v1.20.9 (linux/amd64) kubernetes/454b5b5/system:serviceaccount:kube-system:horizontal-pod-autoscaler,client:172.31.31.244 (23-Nov-2021 11:38:43.270) (total time: 917ms):
Trace[1425914106]: ---"Listing from storage done" 917ms (11:38:00.188)
Trace[1425914106]: [917.53243ms] [917.53243ms] END
E1123 11:40:00.568239       1 writers.go:107] apiserver was unable to write a JSON response: http2: stream closed
E1123 11:40:00.568301       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
E1123 11:40:00.570612       1 writers.go:120] apiserver was unable to write a fallback JSON response: http2: stream closed
I1123 11:40:13.081251       1 trace.go:205] Trace[1553100881]: "List" url:/apis/external.metrics.k8s.io/v1beta1/namespaces/dd065/http-http---ems-dd065-8080-topic-dd065-hsdg->-pendingmessages,user-agent:kube-controller-manager/v1.20.9 (linux/amd64) kubernetes/454b5b5/system:serviceaccount:kube-system:horizontal-pod-autoscaler,client:172.31.31.244 (23-Nov-2021 11:40:12.543) (total time: 537ms):
Trace[1553100881]: ---"Listing from storage done" 537ms (11:40:00.081)
Trace[1553100881]: [537.541397ms] [537.541397ms] END
I1123 11:40:14.319367       1 trace.go:205] Trace[228151876]: "List" url:/apis/external.metrics.k8s.io/v1beta1/namespaces/dd001/http-http---ems-dd001-8080-topic-dd001-hsdg->-pendingmessages,user-agent:kube-controller-manager/v1.20.9 (linux/amd64) kubernetes/454b5b5/system:serviceaccount:kube-system:horizontal-pod-autoscaler,client:172.31.31.244 (23-Nov-2021 11:40:13.742) (total time: 576ms):
Trace[228151876]: ---"Listing from storage done" 576ms (11:40:00.319)
Trace[228151876]: [576.337463ms] [576.337463ms] END
I1123 11:40:56.667583       1 trace.go:205] Trace[398180705]: "List" url:/apis/external.metrics.k8s.io/v1beta1/namespaces/dd065/postgresql-retrigger,user-agent:kube-controller-manager/v1.20.9 (linux/amd64) kubernetes/454b5b5/system:serviceaccount:kube-system:horizontal-pod-autoscaler,client:172.31.31.244 (23-Nov-2021 11:40:55.831) (total time: 836ms):
Trace[398180705]: ---"Listing from storage done" 836ms (11:40:00.667)
Trace[398180705]: [836.281531ms] [836.281531ms] END

Expected Behavior

When the log level is set to 0, only messages at level 0 and below should be logged (as per the log documentation).
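
For illustration, a minimal sketch of that expectation using plain klog/v2 (generic usage, not KEDA's actual wiring): with --v=0, only level-0 messages should pass the verbosity filter.

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil) // registers -v, -logtostderr, ...
	flag.Parse()
	_ = flag.Set("v", "0") // same effect as passing --v=0 on the command line

	klog.V(0).Info("level 0: expected to be logged at -v=0")
	klog.V(2).Info("level 2: expected to be filtered out at -v=0")
	klog.Flush()
}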

Actual Behavior

Traces are logged

Steps to Reproduce the Problem

  1. Start the KEDA Metrics API Server
  2. Let it run for a while
  3. Check the logs


KEDA Version

2.4.0

Kubernetes Version

1.20

Platform

Microsoft Azure

Scaler Details

Prometheus Scaler / Metrics API Scaler

Anything else?

No response

@alexkunde alexkunde added the bug Something isn't working label Nov 23, 2021
@zroubalik
Member

Could you please confirm whether the same problem occurs on 2.5.0?

@alexkunde
Author

Yes, it's still present in 2.5.0

I did a bit of digging, and the trace lines seem to come from the Kubernetes apiserver:
https://github.com/kubernetes/apiserver/blob/8280ed5606745dad4c442ac50cd757429f2aed87/pkg/endpoints/handlers/get.go#L283

I haven't found where this function is called in KEDA, but it seems to emit the trace at the v(0).Info level.
I only found two v(0) calls: one is in the Kafka scaler, and the other is this line:

logger.V(0).Info("Received request for custom metric", "groupresource", info.GroupResource.String(), "namespace", namespace, "metric name", info.Metric, "selector", selector.String())

Maybe that helps somehow :octocat:
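
For context, a rough sketch of how the linked apiserver code produces these lines, based on the k8s.io/utils/trace package it uses (the handler name and threshold below are hypothetical): LogIfLong writes the trace once the request exceeds the threshold, which would explain why the lines show up even with --v=0.

package main

import (
	"time"

	utiltrace "k8s.io/utils/trace"
)

// listExternalMetrics is a hypothetical stand-in for the apiserver List handler.
func listExternalMetrics() {
	trace := utiltrace.New("List",
		utiltrace.Field{Key: "url", Value: "/apis/external.metrics.k8s.io/v1beta1/..."})
	// Emit the trace when the total time exceeds the threshold.
	defer trace.LogIfLong(500 * time.Millisecond)

	time.Sleep(600 * time.Millisecond) // stand-in for the slow storage call
	trace.Step("Listing from storage done")
}

func main() {
	listExternalMetrics()
}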

@stale

stale bot commented Feb 8, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale All issues that are marked as stale due to inactivity label Feb 8, 2022
@tomkerkhove tomkerkhove moved this to Backlog in Roadmap - KEDA Core Feb 10, 2022
@stale

stale bot commented Feb 15, 2022

This issue has been automatically closed due to inactivity.

@stale stale bot closed this as completed Feb 15, 2022
Repository owner moved this from To Do to Ready To Ship in Roadmap - KEDA Core Feb 15, 2022
@felixb

felixb commented Feb 24, 2022

This is still an issue, even in KEDA 2.6.1.

@prousso

prousso commented May 18, 2022

Still on 2.7.1

@tomkerkhove tomkerkhove reopened this May 23, 2022
@stale stale bot removed the stale All issues that are marked as stale due to inactivity label May 23, 2022
Repository owner moved this from Ready To Ship to Proposed in Roadmap - KEDA Core May 23, 2022
@tomkerkhove tomkerkhove changed the title Keda Metrics API Server not honoring Loglevels KEDA Metrics API Server not honoring Loglevels May 23, 2022
@tomkerkhove tomkerkhove moved this from Proposed to To Do in Roadmap - KEDA Core May 23, 2022
@JorTurFer
Member

I have been checking the klog docs, and they say that we MUST call flag.Parse before any logging is done, but checking our implementation I can see that we print the version before that. Could we be facing some internal race condition?
WDYT @zroubalik ?
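
For reference, a minimal sketch of the initialization order klog expects (generic klog/v2 usage, not KEDA's actual code; the version string is a placeholder): flags are registered and parsed before the first log call, so logging the version before flag.Parse could indeed run with klog's default settings.

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags (-v, -logtostderr, ...) and parse them
	// before emitting any log line.
	klog.InitFlags(nil)
	flag.Parse()

	// Only now is the -v value from the command line guaranteed to apply.
	klog.Info("starting adapter, version: <placeholder>")
	klog.V(4).Info("debug detail, only visible when -v >= 4")
	klog.Flush()
}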

@zroubalik
Member

I have been checking the klog docs, and they say that we MUST call flag.Parse before any logging is done, but checking our implementation I can see that we print the version before that. Could we be facing some internal race condition? WDYT @zroubalik ?

This might be the case, hard to tell 🤷‍♂️

Have you been able to test your proposed fix, to see whether these messages no longer appear?

@JorTurFer
Member

I have checked the e2e test output to be sure, but let me deploy it to a testing cluster for a few days, just to double-check 😄

Repository owner moved this from To Do to Ready To Ship in Roadmap - KEDA Core May 28, 2022
@tomkerkhove tomkerkhove moved this from Ready To Ship to Done in Roadmap - KEDA Core Aug 10, 2022