We use the query below to count the logs of a pod, and we find that when a pod produces a massive amount of logs, Azure Monitor may fail to collect some of them.
let containerId = KubePodInventory
| where ContainerStatus == "running"
| summarize by ContainerID, Name
| where Name startswith "xxx"
| project ContainerID;
let startDateTime = datetime('2021-04-16T00:01:00.000Z');
let endDateTime = datetime('2021-04-16T06:40:00.000Z');
ContainerLog
| where ContainerID in (containerId)
| where TimeGenerated >= startDateTime and TimeGenerated < endDateTime
| summarize count() by bin(TimeGenerated, 1m)
| order by TimeGenerated asc
As shown in the screenshot, there is a 5-minute blank window.
We exported the pod logs with "kubectl logs pod" and confirmed that the pod was generating logs during that window. Azure Monitor simply failed to collect them.
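One way to check for such gaps on the kubectl side is to bucket the exported log timestamps per minute, mirroring the Kusto summarize above. A minimal sketch, assuming RFC 3339 timestamps from "kubectl logs --timestamps" and a placeholder pod name:

```shell
# Export logs with timestamps, then count lines per minute.
# "mypod" is a placeholder; substitute the real pod name.
#   kubectl logs mypod --timestamps | count_per_minute

# Works on any stream whose first field is an RFC 3339 timestamp:
# the first 16 characters (YYYY-MM-DDTHH:MM) identify the minute.
count_per_minute() {
  awk '{ print substr($1, 1, 16) }' | sort | uniq -c
}
```

Minutes missing from this output but present in the ContainerLog query result (or vice versa) pinpoint where collection fell behind.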
With the memory and CPU available to the agent, it can only collect a limited volume of logs. The solution is to spread the pods that generate a high volume of logs across different nodes, so the load is spread out as well.
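As a sketch of that suggestion, a pod anti-affinity rule can keep replicas of a log-heavy workload on separate nodes. The Deployment name and app label below are hypothetical; only the affinity stanza is the point:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: noisy-logger        # hypothetical workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: noisy-logger
  template:
    metadata:
      labels:
        app: noisy-logger
    spec:
      affinity:
        podAntiAffinity:
          # Refuse to schedule two replicas on the same node,
          # so no single node's agent has to ingest all the logs.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: noisy-logger
            topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: example/noisy-logger:latest   # placeholder image
```

If strict one-per-node placement is too rigid, `preferredDuringSchedulingIgnoredDuringExecution` gives the scheduler the same hint without blocking scheduling.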