Option for dropping the log body if log was parsed #551
This issue or PR has been automatically marked as stale due to the lack of recent activity. This bot triages issues and PRs according to the following rules:
You can:
If you think that I work incorrectly, kindly raise an issue with the problem. /lifecycle stale
This issue has been automatically marked as stale due to the lack of recent activity. It will soon be closed if no further activity occurs.
Decisions: The new attribute moved from the
Description
A LogPipeline tries to parse a log payload if the payload is in a structured JSON format and then enriches the log record with the parsed entries using the same attribute keys as in the JSON, see also https://kyma-project.io/docs/kyma/latest/01-overview/telemetry/telemetry-02-logs#kubernetes-filter-json-parser
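To illustrate the enrichment described above (the payload and attribute values here are made up for illustration):

```yaml
# Original payload (JSON):
#   {"level": "warn", "message": "disk almost full"}
# Resulting log record attributes after parsing:
log: '{"level": "warn", "message": "disk almost full"}'  # raw payload, kept as-is
level: warn                                              # parsed JSON root attribute
message: disk almost full                                # parsed JSON root attribute
```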
So if the payload is not JSON, the log record only gets the `log` attribute containing the raw payload. If the payload is JSON, the record gets the `log` attribute with the raw payload and, additionally, all parsed JSON root attributes, including the actual log message.
With that approach, the content of the payload is effectively duplicated, with the advantage that the same `log` attribute is consistently available across all log records.
If that behaviour is not wanted, in order to save storage/bandwidth, there should be a new flag in the LogPipeline API to disable the duplication, for example like:
To stay compatible, the flag should initially be introduced as enabled, so that all existing clusters get it persisted. In a follow-up ticket, the default will be switched to disabled.
Criteria
Reasons
Log payloads are duplicated when they are in JSON. That causes unneeded bandwidth/storage/performance overhead and leads to polluted log records.
Attachments