Horizontal Pod Autoscaler With Custom Metric Has "TARGETS%" Column Showing "<unknown>/<unknown>" Despite Having Metrics #1652
Comments
@eddieparker hi! Could you please share more information about your environment (k9s version, k8s version, cluster version, etc.)? Thanks!
I have the same problem, and I think it is the same problem as issue #1617.
Hi @jotasixto! Can you please show us your hpa manifest? Thanks!
@eddieparker it would be nice if you share
This is our definition of an HPA:

Thank you very much for your help!!!
I'm unavailable for a few days, but I will update this with my info if the other commenters' help isn't sufficient. Apologies for the delay, and thanks for looking into it.
@jotasixto thank you! It's very helpful. And one more question: could you please show the target column from
The result of the requested command:

I hope this helps 🙏!
I have the same problem, and I noticed that kubernetes is lying when outputting the yaml. I don't know if it impacts the way k9s calculates the TARGETS%. My HPA is using `apiVersion: autoscaling/v2beta2`, but kubectl reports it back as `autoscaling/v1` (kubernetes client v1.20.2, server v1.20.11).

My yaml:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpatest
spec:
  behavior:
    scaleDown:
      policies:
      - type: Pods
        value: 1
        periodSeconds: 120
      selectPolicy: Min
      stabilizationWindowSeconds: 300
    scaleUp:
      policies:
      - type: Pods
        value: 1
        periodSeconds: 120
      selectPolicy: Min
      stabilizationWindowSeconds: 300
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: AverageValue
        averageValue: 1800m
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cpustress
```

The output from `kubectl`:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/behavior: '{"ScaleUp":{"StabilizationWindowSeconds":300,"SelectPolicy":"Min","Policies":[{"Type":"Pods","Value":1,"PeriodSeconds":120}]},"ScaleDown":{"StabilizationWindowSeconds":300,"SelectPolicy":"Min","Policies":[{"Type":"Pods","Value":1,"PeriodSeconds":120}]}}'
    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2022-07-14T20:05:57Z","reason":"ReadyForNewScale","message":"recommended size matches current size"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2022-07-14T22:23:09Z","reason":"ValidMetricFound","message":"the HPA was able to successfully calculate a replica count from cpu resource"},{"type":"ScalingLimited","status":"True","lastTransitionTime":"2022-07-15T02:56:09Z","reason":"TooManyReplicas","message":"the desired replica count is more than the maximum replica count"}]'
    autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":"Resource","resource":{"name":"cpu","currentAverageValue":"2"}}]'
    autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Resource","resource":{"name":"cpu","targetAverageValue":"1800m"}}]'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"autoscaling/v2beta2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"hpatest","namespace":"hpatest"},"spec":{"behavior":{"scaleDown":{"policies":[{"periodSeconds":120,"type":"Pods","value":1}],"selectPolicy":"Min","stabilizationWindowSeconds":300},"scaleUp":{"policies":[{"periodSeconds":120,"type":"Pods","value":1}],"selectPolicy":"Min","stabilizationWindowSeconds":300}},"maxReplicas":3,"metrics":[{"resource":{"name":"cpu","target":{"averageValue":"1800m","type":"AverageValue"}},"type":"Resource"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"cpustress"}}}
  creationTimestamp: "2022-07-14T20:05:42Z"
  name: hpatest
  namespace: hpatest
  resourceVersion: "105305232"
  uid: 246ad2fc-e4c7-4c5d-a4fd-9e42910ee78d
spec:
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cpustress
status:
  currentReplicas: 3
  desiredReplicas: 3
  lastScaleTime: "2022-07-15T02:50:00Z"
```
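A side note on the `autoscaling/v1` representation above: the v2beta2 metric data is not lost, it is round-tripped through the `autoscaling.alpha.kubernetes.io/*` annotations, so a client can still recover the current and target values by parsing that JSON. A minimal sketch (using the annotation values from the yaml above; this is an illustration, not necessarily how k9s reads them):

```python
import json

# Annotation values copied verbatim from the `kubectl` output above.
# The v1 object stows the v2beta2 metrics/status in these strings.
metrics_ann = '[{"type":"Resource","resource":{"name":"cpu","targetAverageValue":"1800m"}}]'
current_ann = '[{"type":"Resource","resource":{"name":"cpu","currentAverageValue":"2"}}]'

# Each annotation is a JSON list with one entry per metric.
target = json.loads(metrics_ann)[0]["resource"]["targetAverageValue"]
current = json.loads(current_ann)[0]["resource"]["currentAverageValue"]

print(f"cpu: {current}/{target}")  # → cpu: 2/1800m
```

This is exactly the data that the TARGETS% column renders as `<unknown>/<unknown>` when only the v1 object is consulted.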
@ubcharron Thank you for the details! I think k8s uses the preferred version, which would be v1, hence the yaml reported by kubectl. I believe this is a duplicate of #1617, as @jotasixto pointed out (Thank you Juan!).
@derailed Thank you very much for the solution!
Thank you so much for doing this while I was away. Works perfectly! |
I have a set of horizontal pod autoscalers (HPAs) driven off of a custom metric server. If I use `kubectl get hpa` I get:

If I use k9s I get `<unknown>/<unknown>` in the TARGETS% column. Is there any way to get this to show the actual target data?