Namespaces declared to be excluded are not. #26

Closed
larssb opened this issue May 5, 2024 · 0 comments
Labels
bug Something isn't working

Comments


larssb commented May 5, 2024

Issue

In my Helm values I declare:

excludedNamespaces:
  - backup
  - certificates
  - a
  - b
  - c
  - d
  - kube-system
  - longhorn-system
  - network
  - scaling #! We don't want to scale down the kube-downscaler itself
  - e
  - f
  - g

In the ConfigMap that the kube-downscaler Helm chart templates and deploys, I see:

apiVersion: v1
data:
  DEFAULT_UPTIME: Mon-Fri 06:30-17:30 Europe/Copenhagen
  EXCLUDE_NAMESPACES: backup, certificates, a, b, c,
    kube-system, longhorn-system, network, scaling, e, f, g
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: kube-downscaler
    meta.helm.sh/release-namespace: scaling
  creationTimestamp: "2024-04-23T14:30:45Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: py-kube-downscaler
  namespace: scaling
  resourceVersion: "907348252"
  uid: 92bcbe51-83bf-4518-a53d-1d375e44f6a0
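
Note that the rendered EXCLUDE_NAMESPACES value separates the entries with a comma followed by a space, presumably because the chart joins the list with ", ". If the downscaler then splits this string on bare commas without trimming, every entry after the first keeps its leading space, and an exact comparison against a namespace name fails. A minimal sketch of that suspected behavior, purely illustrative and not the project's actual parsing code:

raw = ("backup, certificates, a, b, c, "
       "kube-system, longhorn-system, network, scaling, e, f, g")

excluded = raw.split(",")     # naive split keeps the leading spaces
print(excluded[7])            # ' network' -- note the leading space
print("network" in excluded)  # False: the exact match never succeeds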

However, NOT all of the namespaces are excluded. This is confirmed simply by seeing that Pods are scaled down in the supposedly excluded namespaces, as well as by log lines like these from the py-kube-downscaler:

2024-05-05 20:09:24,499 DEBUG: Deployment network/cilium-operator has 2 replicas (original: None, uptime: Mon-Fri 06:30-17:30 Europe/Copenhagen)
2024-05-05 20:09:24,499 INFO: Scaling down Deployment network/cilium-operator from 2 to 0 replicas (uptime: Mon-Fri 06:30-17:30 Europe/Copenhagen, downtime: never)
2024-05-05 20:09:24,551 DEBUG: https://10.43.0.1:443 "PATCH /apis/apps/v1/namespaces/network/deployments/cilium-operator HTTP/1.1" 200 None
2024-05-05 20:09:24,553 DEBUG: Deployment network/hubble-relay has 1 replicas (original: None, uptime: Mon-Fri 06:30-17:30 Europe/Copenhagen)
2024-05-05 20:09:24,553 INFO: Scaling down Deployment network/hubble-relay from 1 to 0 replicas (uptime: Mon-Fri 06:30-17:30 Europe/Copenhagen, downtime: never)
2024-05-05 20:09:24,620 DEBUG: https://10.43.0.1:443 "PATCH /apis/apps/v1/namespaces/network/deployments/hubble-relay HTTP/1.1" 200 None
2024-05-05 20:09:24,622 DEBUG: Deployment network/hubble-ui has 1 replicas (original: None, uptime: Mon-Fri 06:30-17:30 Europe/Copenhagen)
2024-05-05 20:09:24,622 INFO: Scaling down Deployment network/hubble-ui from 1 to 0 replicas (uptime: Mon-Fri 06:30-17:30 Europe/Copenhagen, downtime: never)
2024-05-05 20:09:24,675 DEBUG: https://10.43.0.1:443 "PATCH /apis/apps/v1/namespaces/network/deployments/hubble-ui HTTP/1.1" 200 None
2024-05-05 20:09:24,677 DEBUG: Deployment network/k8s-gateway has 2 replicas (original: None, uptime: Mon-Fri 06:30-17:30 Europe/Copenhagen)
2024-05-05 20:09:24,677 INFO: Scaling down Deployment network/k8s-gateway from 2 to 0 replicas (uptime: Mon-Fri 06:30-17:30 Europe/Copenhagen, downtime: never)
2024-05-05 20:09:24,737 DEBUG: https://10.43.0.1:443 "PATCH /apis/apps/v1/namespaces/network/deployments/k8s-gateway HTTP/1.1" 200 None

Problem to solve

Ensure that ALL namespaces excluded via the excludedNamespaces Helm value are actually excluded.
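
If whitespace around the entries is indeed the cause, trimming each entry when the EXCLUDE_NAMESPACES string is parsed would make the rendered value work regardless of spacing. A hedged sketch of such a fix; the function name is hypothetical and not the downscaler's actual API:

def parse_excluded_namespaces(raw: str) -> frozenset:
    # Strip surrounding whitespace from each entry so that both
    # "a,b,c" and "a, b, c" yield the same set of namespace names.
    return frozenset(ns.strip() for ns in raw.split(",") if ns.strip())

assert "network" in parse_excluded_namespaces("backup, certificates, network")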

larssb changed the title "Namespaces declared to be excluded are not. When white" → "Namespaces declared to be excluded are not." May 5, 2024
larssb added a commit to larssb/py-kube-downscaler that referenced this issue May 5, 2024
@Fovty Fovty closed this as completed May 7, 2024
@JTaeuber JTaeuber added the bug Something isn't working label Sep 3, 2024