Sync up logger used in Fleet server with Agent/ Beats #25391
Comments
Both timestamps are RFC 3339. The Agent timestamp uses the local timezone; the Fleet Server timestamp is in Zulu. They should align if you take the timezone into account when parsing. @EricDavisX Can you provide an example of two things happening at the same time but with stamps minutes apart? It would be preferable if the Agent logged in Zulu. |
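The point about alignment can be demonstrated with a minimal Go sketch: once the Agent's local-offset stamp is re-expressed in UTC, it is directly comparable to a Fleet Server stamp. The layout strings below are assumptions inferred from the samples quoted in this thread, not taken from either codebase; note the Agent sample uses a colon-less offset (`-0400`), which `time.RFC3339` alone will not parse.

```go
package main

import (
	"fmt"
	"time"
)

// agentLayout matches the Agent's local-offset stamps seen in this
// thread, e.g. "2021-04-26T15:21:38.184-0400" (no colon in the offset).
const agentLayout = "2006-01-02T15:04:05.999-0700"

// toZulu re-expresses an Agent timestamp in UTC ("Zulu") with
// millisecond precision, the form Fleet Server's logs use.
func toZulu(stamp string) (string, error) {
	t, err := time.Parse(agentLayout, stamp)
	if err != nil {
		return "", err
	}
	return t.UTC().Format("2006-01-02T15:04:05.000Z07:00"), nil
}

func main() {
	z, err := toZulu("2021-04-26T15:21:38.184-0400")
	if err != nil {
		panic(err)
	}
	fmt.Println(z) // 2021-04-26T19:21:38.184Z
}
```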
I don't have the example anymore, and it was the best assessment PH and I had at the time that things were not synced up. Maybe we were reading the logs wrong, a bad inference (from a repeated log line or something) - it is possible it is all okay except for the minor format and timezone differences. |
Relooking at this, Elastic Agent is the problem; let me transfer this to the Beats repository. |
Pinging @elastic/agent (Team:Agent) |
@michalpristas This looks like a good issue to pair with @michel-laterman? |
Now I wonder if we should align fleet-server to the rest (meaning Agent and Beats) or the rest to fleet-server. I would prefer changing how fleet-server logs time (https://github.com/elastic/fleet-server/blob/22d2a7651fd631c5caa145043c42c59d42931ee3/internal/pkg/logger/logger.go#L84).
|
Beats should log in RFC 3339 Zulu. It is a shorter, more concise format. |
@urso can this change at the Beats level be considered breaking? |
For Beats itself I would consider this a breaking change: if the log format changes, things break from a log collector's perspective. But if Beats logs differently only when run by Elastic Agent, it is a breaking change we could still make while we are in beta. So maybe we should adjust it when run by Elastic Agent? |
Most important is that Filebeat can pick up JSON logs and parse the timestamps correctly. In the past, RFC 3339 Zulu was the only supported format. @michalpristas updated the timestamp parser. It should be able to parse both formats, right?
Not sure how straightforward this is. +1 if it's easy to change the timestamp format in our logger. @michalpristas Is the updated timestamp parser aware of timezones? |
I've reassigned it to 7.14; it's a bug, but I think it's not required for 7.13. |
The updated timestamp parser recognizes two formats. When a log timestamp contains timezone information, that information is not lost. |
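A parser that accepts both formats while preserving the original offset might look like the following. This is a sketch, not the actual Beats parser; the candidate layouts are assumptions based on the samples in this thread.

```go
package main

import (
	"fmt"
	"time"
)

// Candidate layouts: RFC 3339 (Zulu or ±hh:mm offset) and the
// colon-less ±hhmm offset seen in Agent logs. These are assumptions
// inferred from the samples in this thread.
var layouts = []string{
	"2006-01-02T15:04:05.999Z07:00",
	"2006-01-02T15:04:05.999-0700",
}

// parseStamp tries each layout in turn. The returned time keeps the
// original fixed offset, so no timezone information is lost.
func parseStamp(s string) (time.Time, error) {
	var lastErr error
	for _, layout := range layouts {
		t, err := time.Parse(layout, s)
		if err == nil {
			return t, nil
		}
		lastErr = err
	}
	return time.Time{}, lastErr
}

func main() {
	for _, s := range []string{
		"2021-04-26T17:59:49.824Z",
		"2021-04-26T15:21:38.184-0400",
	} {
		t, err := parseStamp(s)
		if err != nil {
			panic(err)
		}
		_, offset := t.Zone()
		fmt.Println(t.UTC().Format(time.RFC3339Nano), "offset:", offset)
	}
}
```

Converting the parsed values to UTC (as Beats does before indexing) puts both sources on a common timeline.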
In other words, logs indexed in Elasticsearch will all be indexed with the correct timestamp and timezone (Beats also converts timestamps to UTC before indexing), and there will be no offset between log events from Agent, Beats, and Fleet Server in Elasticsearch. @EricDavisX As Agent is supposed to ship logs to ES, we might rather want to check the logs in Elasticsearch and not on disk for now. Minor formatting differences on disk should be no problem (and we can fix those later). But when shipping logs, they get parsed and processed by Filebeat before being sent to ES (which might introduce other errors). In ES (Logs UI):
I guess we got too focused on formatting here. After timestamps have been normalized, are there still big gaps between corresponding events in Agent and Fleet Server? If so, and you think the offsets are too big, let's file a separate issue. |
will do, thanks for the good discussion here. |
We shouldn't change beats but we could change Fleet Server format. |
The Fleet Server component is known to use a different logging construct. As a result, debugging Fleet Server and Agent together is harder, because their logs show timestamps in different formats.
here is an example from the fleet server log:
"@timestamp":"2021-04-26T17:59:49.824Z"
/opt/Elastic/Agent/data/elastic-agent-9c3fe9/logs/default/fleet-server-json.log
and here is one from the Agent log:
"@timestamp":"2021-04-26T15:21:38.184-0400"
/opt/Elastic/Agent/data/elastic-agent-9c3fe9/logs/elastic-agent-json.log
Note that the formatting is different. Beyond that, events that seemed to happen in roughly the same time frame were cited as being minutes apart, which makes the non-interleaved files much harder to match up and in some cases caused false alarms despite reasonable assumptions being made from the logging information.