Horizontal Pod Autoscaler With Custom Metric Has "Targets%" Column Showing "<unknown>/<unknown>" Despite Having Metrics #1652

Closed
eddieparker opened this issue Jul 11, 2022 · 12 comments
Labels: duplicate (This issue or pull request already exists)

Comments

@eddieparker

I have a set of horizontal pod autoscalers (HPAs) driven by a custom metrics server. If I use kubectl get hpa I get:

C:\Users\eddie>kubectl get hpa
NAME                              REFERENCE                                       TARGETS      MINPODS   MAXPODS   REPLICAS   AGE
my-project-worker1-hpa         Deployment/my-project-worker1-worker           8823020m/10k     50        600       50         2d22h
my-project-worker2-hpa         Deployment/my-project-worker2-worker         671784290m/10k     400       600       600        2d22h
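
For reference, the TARGETS column uses Kubernetes quantity notation: an m suffix means thousandths and k means thousands, so the first row decodes roughly as:

8823020m / 10k  =  8823.02 / 10000   (current metric value vs. target)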

If I use k9s, the TARGETS% column shows "<unknown>/<unknown>" instead. Is there any way to get it to show the actual target data?

@slimus (Collaborator) commented Jul 12, 2022

@eddieparker hi! Could you please share more information about your environment (k9s version, k8s version, cluster setup, etc.)? Thanks!

@jotasixto

I have the same problem, and I think it is the same problem as issue #1617.

@slimus (Collaborator) commented Jul 13, 2022

Hi @jotasixto! Can you please show us your hpa manifest? Thanks!

@slimus (Collaborator) commented Jul 13, 2022

@eddieparker it would be nice if you could share the output of kubectl edit hpa <hpa-name> with us. Thanks!
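
For anyone following along, a non-interactive way to capture the same YAML without opening an editor:

kubectl get hpa <hpa-name> -o yaml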

@jotasixto

> Hi @jotasixto! Can you please show us your hpa manifest? Thanks!

This is our definition of an HPA:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/behavior: '{"ScaleUp":{"StabilizationWindowSeconds":45,"SelectPolicy":"Max","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":15},{"Type":"Percent","Value":100,"PeriodSeconds":15}]},"ScaleDown":{"StabilizationWindowSeconds":3600,"SelectPolicy":"Min","Policies":[{"Type":"Percent","Value":100,"PeriodSeconds":15},{"Type":"Pods","Value":1,"PeriodSeconds":300}]}}'
    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2022-07-08T12:57:30Z","reason":"ReadyForNewScale","message":"recommended
      size matches current size"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2022-07-13T15:15:39Z","reason":"ValidMetricFound","message":"the
      HPA was able to successfully calculate a replica count from external metric
      datadogmetric@play-default:play-apigateway-apigateway-cpu-usage(nil)"},{"type":"ScalingLimited","status":"True","lastTransitionTime":"2022-07-13T16:23:05Z","reason":"TooFewReplicas","message":"the
      desired replica count is less than the minimum replica count"}]'
    autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":"External","external":{"metricName":"datadogmetric@play-default:play-apigateway-apigateway-cpu-usage","currentValue":"31m"}}]'
    autoscaling.alpha.kubernetes.io/metrics: '[{"type":"External","external":{"metricName":"datadogmetric@play-default:play-apigateway-apigateway-cpu-usage","targetValue":"600m"}}]'
    meta.helm.sh/release-name: play
    meta.helm.sh/release-namespace: play-default
  creationTimestamp: "2022-07-08T12:57:15Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: play-apigateway-apigateway
  namespace: play-default
  resourceVersion: "57005064"
  uid: 4a78d180-a7c4-4725-8671-502e644b8be3
spec:
  maxReplicas: 6
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: play-apigateway-apigateway
status:
  currentReplicas: 2
  desiredReplicas: 2
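
Note that this comes back as autoscaling/v1, with the external metric folded into the autoscaling.alpha.kubernetes.io annotations. Expressed directly in autoscaling/v2beta2, the same HPA would look roughly like the sketch below (a reconstruction from the annotations above, not the actual source manifest):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: play-apigateway-apigateway
  namespace: play-default
spec:
  minReplicas: 2
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: play-apigateway-apigateway
  metrics:
    - type: External
      external:
        metric:
          name: datadogmetric@play-default:play-apigateway-apigateway-cpu-usage
        target:
          type: Value
          value: 600m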

Thank you very much for your help!!!

@eddieparker (Author)

I'm unavailable for a few days, but I will update this with my info if the other commenters' help isn't sufficient. Apologies for the delay, and thanks for looking into it.

@slimus (Collaborator) commented Jul 14, 2022

@jotasixto thank you! It's very helpful. One more question: could you please show the TARGETS column from kubectl -n play-apigateway-apigateway get hpa/play-apigateway-apigateway?
Why am I asking? I get different output from the edit command, and I don't understand what we should show here. Thank you for your cooperation.

@jotasixto commented Jul 14, 2022

> @jotasixto thank you! It's very helpful. One more question: could you please show the TARGETS column from kubectl -n play-apigateway-apigateway get hpa/play-apigateway-apigateway? Why am I asking? I get different output from the edit command, and I don't understand what we should show here. Thank you for your cooperation.

The result of the requested command:

❯ kubectl -n play-default get hpa/play-apigateway-apigateway
NAME                                 REFERENCE                               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
play-apigateway-apigateway   Deployment/play-apigateway-apigateway           39m/600m   2         6         2          6d3h

I hope this helps 🙏 !

@ubcharron

I have the same problem, and I noticed that Kubernetes is lying when outputting the YAML; I don't know if that impacts the way k9s calculates the TARGETS%. My HPA uses apiVersion: autoscaling/v2beta2, but kubectl get -o yaml and k9s display apiVersion: autoscaling/v1. Notably, the output YAML is missing the metrics section, which uses AverageValue rather than Utilization. I have another HPA using resource.target.type: Utilization, and that one displays the proper TARGETS% in k9s.

(kubernetes client v1.20.2, server v1.20.11)
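
One way to see the stored v2beta2 view, assuming the API server still serves that version, is to ask for it explicitly by fully-qualified resource name so kubectl doesn't fall back to the preferred version:

kubectl -n hpatest get horizontalpodautoscalers.v2beta2.autoscaling/hpatest -o yaml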

My yaml:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpatest
spec:
  behavior:
    scaleDown:
      policies:
        - type: Pods
          value: 1
          periodSeconds: 120
      selectPolicy: Min
      stabilizationWindowSeconds: 300
    scaleUp:
      policies:
        - type: Pods
          value: 1
          periodSeconds: 120
      selectPolicy: Min
      stabilizationWindowSeconds: 300
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: AverageValue
          averageValue: 1800m
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cpustress

The output from kubectl get hpa/hpatest -o yaml (minus managedFields):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/behavior: '{"ScaleUp":{"StabilizationWindowSeconds":300,"SelectPolicy":"Min","Policies":[{"Type":"Pods","Value":1,"PeriodSeconds":120}]},"ScaleDown":{"StabilizationWindowSeconds":300,"SelectPolicy":"Min","Policies":[{"Type":"Pods","Value":1,"PeriodSeconds":120}]}}'
    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2022-07-14T20:05:57Z","reason":"ReadyForNewScale","message":"recommended
      size matches current size"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2022-07-14T22:23:09Z","reason":"ValidMetricFound","message":"the
      HPA was able to successfully calculate a replica count from cpu resource"},{"type":"ScalingLimited","status":"True","lastTransitionTime":"2022-07-15T02:56:09Z","reason":"TooManyReplicas","message":"the
      desired replica count is more than the maximum replica count"}]'
    autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":"Resource","resource":{"name":"cpu","currentAverageValue":"2"}}]'
    autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Resource","resource":{"name":"cpu","targetAverageValue":"1800m"}}]'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"autoscaling/v2beta2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"hpatest","namespace":"hpatest"},"spec":{"behavior":{"scaleDown":{"policies":[{"periodSeconds":120,"type":"Pods","value":1}],"selectPolicy":"Min","stabilizationWindowSeconds":300},"scaleUp":{"policies":[{"periodSeconds":120,"type":"Pods","value":1}],"selectPolicy":"Min","stabilizationWindowSeconds":300}},"maxReplicas":3,"metrics":[{"resource":{"name":"cpu","target":{"averageValue":"1800m","type":"AverageValue"}},"type":"Resource"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"cpustress"}}}
  creationTimestamp: "2022-07-14T20:05:42Z"
  name: hpatest
  namespace: hpatest
  resourceVersion: "105305232"
  uid: 246ad2fc-e4c7-4c5d-a4fd-9e42910ee78d
spec:
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cpustress
status:
  currentReplicas: 3
  desiredReplicas: 3
  lastScaleTime: "2022-07-15T02:50:00Z"

@derailed added the duplicate label on Jul 18, 2022
@derailed (Owner)

@ubcharron Thank you for the details! I think k8s serves the preferred API version, which here is v1, hence the reported YAML from kubectl. I believe this is a dup of #1617, as @jotasixto (thank you, Juan!) pointed out.
Let's see if we're happier on v0.26.0. Closing as a dup of #1617.
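
For anyone who wants to check which versions of the autoscaling API their server advertises, kubectl api-versions lists them; on a 1.20-era cluster the output typically includes:

kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2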

@jotasixto

@derailed Thank you very much for the solution!
With version v0.26.0 it works perfectly 👏

@eddieparker (Author)

Thank you so much for doing this while I was away. Works perfectly!
