Incorrect findings for HorizontalPodAutoscaler #216

Closed
sybo-matthew opened this issue Apr 5, 2023 · 6 comments · Fixed by #245

Comments


sybo-matthew commented Apr 5, 2023

Checklist:

  • I've searched for similar issues and couldn't find anything matching
  • I've included steps to reproduce the bug.
  • I've included the version of Kubernetes and k8sgpt.

Subject of the issue

k8sgpt returns an incorrect finding for a HorizontalPodAutoscaler, reporting that its scale target is missing. Specifically:

- Error: HorizontalPodAutoscaler uses Deployment/teams as ScaleTargetRef which does not exist.

Your environment

  • Version of Kubernetes
    Client Version: v1.24.12-dispatcher
    Kustomize Version: v4.5.4
    Server Version: v1.24.9-gke.3200
  • macOS 13.2.1
  • Version of k8sgpt
    k8sgpt version 0.1.8

Steps to reproduce

  • Run k8sgpt analyze --explain
  • Receive output of: - Error: HorizontalPodAutoscaler uses Deployment/teams as ScaleTargetRef which does not exist.
  • Confirm the deployment does exist in its namespace:
kubectl get deployment teams -n other-namespace
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
teams   2/2     2            2           144d

  • Confirm autoscaling is working (output of kubectl describe hpa teams -n other-namespace):
Name:                                                  teams
Namespace:                                             other-namespace
Annotations:                                           <none>
CreationTimestamp:                                     Fri, 11 Nov 2022 11:07:40 +0100
Reference:                                             Deployment/teams
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  1% (2m) / 80%
Min replicas:                                          2
Max replicas:                                          150
Deployment pods:                                       2 current / 2 desired
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  recommended size matches current size
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  True    TooFewReplicas    the desired replica count is less than the minimum replica count
Events:           <none>

Expected behaviour

Since the HorizontalPodAutoScaler filter is not active, k8sgpt should not be trying to check the HPA at all (see the sketch after the filter list below):

❯ k8sgpt filters list
Active:
> Pod
> ReplicaSet
> PersistentVolumeClaim
> Service
> Ingress
Unused:
> HorizontalPodAutoScaler
> PodDisruptionBudget
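
To make the expectation concrete, here is a minimal Go sketch of the gating the filter list implies. This is not k8sgpt's actual implementation; the activeFilters map and the analyzer loop are made up for illustration. The point is simply that an analyzer should only run when its resource kind appears in the active filter set, so with HorizontalPodAutoScaler listed under Unused there should be no HPA findings at all.

// Hypothetical sketch, not k8sgpt code: analyzers gated by the active filter set.
package main

import "fmt"

func main() {
	// Filters reported as Active by `k8sgpt filters list` above.
	activeFilters := map[string]bool{
		"Pod":                   true,
		"ReplicaSet":            true,
		"PersistentVolumeClaim": true,
		"Service":               true,
		"Ingress":               true,
	}

	// An analyzer should only run when its kind is in the active set.
	for _, kind := range []string{"Pod", "HorizontalPodAutoScaler"} {
		if activeFilters[kind] {
			fmt.Printf("running %s analyzer\n", kind)
		} else {
			fmt.Printf("skipping %s analyzer (filter not active)\n", kind)
		}
	}
}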

Actual behaviour

It seems to be incorrectly checking for HPA deployment targets in the default namespace.
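
For illustration, here is a hypothetical Go sketch, using client-go, of the kind of lookup that would produce exactly this symptom. It is not the actual k8sgpt analyzer code, and checkHPATargets and configuredNamespace are names invented for this example: if the HPA's scale target is resolved in a fixed namespace (for example "default") rather than in the HPA's own namespace, every HPA that lives elsewhere gets reported as pointing at a Deployment that "does not exist".

// Hypothetical illustration of the suspected lookup bug; NOT the actual k8sgpt analyzer code.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func checkHPATargets(ctx context.Context, client kubernetes.Interface, configuredNamespace string) error {
	// List HPAs across all namespaces ("" means every namespace).
	hpas, err := client.AutoscalingV2().HorizontalPodAutoscalers("").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, hpa := range hpas.Items {
		if hpa.Spec.ScaleTargetRef.Kind != "Deployment" {
			continue
		}
		// Buggy pattern: resolving the target in the analyzer's configured
		// namespace instead of hpa.Namespace.
		_, err := client.AppsV1().Deployments(configuredNamespace).Get(ctx, hpa.Spec.ScaleTargetRef.Name, metav1.GetOptions{})
		if err != nil {
			// Correct behaviour would query Deployments(hpa.Namespace) instead.
			fmt.Printf("HorizontalPodAutoscaler uses Deployment/%s as ScaleTargetRef which does not exist.\n",
				hpa.Spec.ScaleTargetRef.Name)
		}
	}
	return nil
}

With Deployments(hpa.Namespace) used for the lookup instead, the teams HPA in other-namespace would resolve correctly and no finding would be emitted.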

Additional context / screenshots

matthisholleville (Contributor) commented Apr 5, 2023

Hello,

Thank you for your issue. I will check.

matthisholleville (Contributor) commented:

I just tried it and I couldn't reproduce it. Are your deployment and HPA in the same namespace?

sybo-matthew (Author) commented:

Yeah, if you look at the output from the HPA above, it's in the same namespace as the service.

matthisholleville (Contributor) commented:

Can you please send me the apiVersions that you are using for the HPA and the Deployment?


larssb commented Apr 9, 2023

I'm also seeing this:

HPA API version: autoscaling/v2
Deployment: apps/v1

I'm talking about this issue in #229

matthisholleville (Contributor) commented:

Thank you for reporting the issue. I have just pushed a fix that resolves the problem.
