I have incorporated the k8s Python client library into a monitoring application, which checks for connections between the other pods of the same app and exposes the metrics for a Prometheus server to scrape.
The following function does this:
```python
import random

import requests
from kubernetes import client, config

# RANDOM_KEY is a module-level constant defined elsewhere in the app.

def check_peer_connections():  # function name illustrative
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod("monitoring", label_selector='app=witness')
    peer_connection_count = 0
    random_pod_items = random.choices(pods.items, k=RANDOM_KEY)
    for pod in random_pod_items:
        pod_witness_endpoint = "http://" + pod.status.pod_ip + ":5000/metrics"
        req = requests.get(pod_witness_endpoint, headers={'Connection': 'close'})
        if req.status_code == 200:
            peer_connection_count = peer_connection_count + 1
    # deleting the in-memory config (which is of huge size)
    del pods
    del v1
    if peer_connection_count >= 1:
        return 1
    else:
        return 0
```
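For context, this function is driven by a simple polling loop that publishes the result for Prometheus to scrape. Roughly like the sketch below (the `prometheus_client` usage, metric name, and poll interval here are illustrative, not the exact production code):

```python
import time

from prometheus_client import Gauge, start_http_server

# Illustrative gauge; the real metric name differs.
peer_up = Gauge('witness_peer_connection_up',
                '1 if at least one peer /metrics endpoint answered 200')

if __name__ == '__main__':
    start_http_server(5000)  # the /metrics port the peers probe
    while True:
        peer_up.set(check_peer_connections())
        time.sleep(15)  # illustrative poll interval
```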
I have added explicit `del` calls on these objects so the GC knows the unreferenced objects can be cleaned up.
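For what it's worth, my understanding of standard CPython behavior is that `del` only drops the name binding: refcounted objects are freed immediately, while cyclic garbage waits for the collector. Adding an explicit `gc.collect()` after the `del` calls would force that pass (a sketch of the idea, not something I have confirmed helps here):

```python
import gc

del pods      # drop the references so the objects become unreachable
del v1
gc.collect()  # force an immediate pass over any cyclic garbage
```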
I have also explicitly specified the resource `requests` and `limits` for this monitoring application in our Helm chart:
```yaml
resources:
  limits:
    memory: 128Mi
  requests:
    memory: 128Mi
```
The problem is that the memory usage increases linearly until it reaches the limit, which causes the pod to be killed by Kubernetes with an `Out of Memory` error, so the service pod accumulates thousands of restarts in a couple of hours.
Before using the Python k8s client library, the memory usage was:
```
bash-3.2$ kubectl top pod -l app=witness -n monitoring
NAME                      CPU(cores)   MEMORY(bytes)
witness-deamonset-2j9wn   1m           26Mi
witness-deamonset-4c772   1m           25Mi
witness-deamonset-4ltzp   2m           23Mi
witness-deamonset-524ch   3m           35Mi
witness-deamonset-57jlq   1m           37Mi
witness-deamonset-5zwz8   1m           29Mi
witness-deamonset-6q76n   1m           25Mi
witness-deamonset-b6bfq   1m           37Mi
witness-deamonset-cc4hx   1m           35Mi
witness-deamonset-f5v9l   4m           35Mi
witness-deamonset-g8cqq   1m           36Mi
witness-deamonset-gf4zd   1m           31Mi
witness-deamonset-h4wqw   1m           29Mi
witness-deamonset-lp2hw   2m           25Mi
witness-deamonset-mpzwf   1m           32Mi
witness-deamonset-mxhk4   1m           31Mi
witness-deamonset-q8j9j   1m           27Mi
witness-deamonset-r85fv   2m           26Mi
witness-deamonset-rwwsk   1m           37Mi
witness-deamonset-swq8b   3m           34Mi
witness-deamonset-t6q67   1m           37Mi
witness-deamonset-twn7z   1m           34Mi
witness-deamonset-v2lx6   1m           37Mi
witness-deamonset-xxrrc   1m           35Mi
witness-deamonset-z74jh   1m           35Mi
witness-deamonset-zm26p   1m           31Mi
```
After adding the Python client, the memory usage pattern is:
```
NAME                      CPU(cores)   MEMORY(bytes)
witness-deamonset-24ttj   111m         90Mi
witness-deamonset-2wgjg   117m         88Mi
witness-deamonset-594bb   0m           0Mi
witness-deamonset-6jhrq   105m         83Mi
witness-deamonset-7nkv6   115m         89Mi
witness-deamonset-8mvw2   0m           0Mi
witness-deamonset-95nwf   0m           0Mi
witness-deamonset-brb7n   0m           0Mi
witness-deamonset-d7ctd   89m          87Mi
witness-deamonset-fd75t   113m         86Mi
witness-deamonset-gm8fj   125m         79Mi
witness-deamonset-gzxnk   106m         86Mi
witness-deamonset-hdhwp   119m         82Mi
witness-deamonset-hl7f6   435m         239Mi
witness-deamonset-hrzxz   122m         83Mi
witness-deamonset-kvsn2   163m         83Mi
witness-deamonset-l4tpn   99m          84Mi
witness-deamonset-lf82h   115m         82Mi
witness-deamonset-lsrpn   117m         83Mi
witness-deamonset-mmmw4   123m         84Mi
witness-deamonset-nfsjb   0m           0Mi
witness-deamonset-pgls2   286m         117Mi
witness-deamonset-qkpql   0m           0Mi
witness-deamonset-w67wv   0m           0Mi
witness-deamonset-wj4g6   93m          84Mi
witness-deamonset-wkxdg   0m           0Mi
witness-deamonset-zb9dv   347m         180Mi
```
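To narrow down where the growth comes from, I intend to instrument a single polling iteration with the standard-library `tracemalloc` module (a diagnostic sketch, not part of the deployed code):

```python
import tracemalloc

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

check_peer_connections()  # one polling iteration

snapshot = tracemalloc.take_snapshot()
# Print the ten allocation sites that grew the most since the baseline.
for stat in snapshot.compare_to(baseline, 'lineno')[:10]:
    print(stat)
```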
Please help me with this issue.