[APM] Display related trace logs in the trace sample content #67611
Comments
Pinging @elastic/apm-ui (Team:apm)
Big +1 from me. This seems blocked on the Logs Stream app becoming embeddable. @elastic/logs-metrics-ui do we have an issue tracking the conversion of the Logs Stream app into an embeddable component?
Not yet - the approach probably depends heavily on how exactly the embedding UI would have to control and inject the source configuration.
@weltenwort makes sense. What do you think about a simple first version that takes the same parameters as the link from APM to Logs?

```jsx
<LogStream
  timestamp="1590690626648"
  filter={encodeURIComponent('trace.id:"0570667f4e27e2cac0d6c5b311c65918"')}
/>
```

In the first version I think it's fine if it's just a static number of log lines. If users want to see more lines, they can click "Open in Logs", which will take them to the Logs app.
An alternative approach could be to specify the time range:

```jsx
<LogStream
  startTime="1590690626648"
  endTime="1590690636648"
  filter={encodeURIComponent('trace.id:"0570667f4e27e2cac0d6c5b311c65918"')}
/>
```
@formgeist and I came up with two other contexts (besides per-trace) to consider:
In both cases the data volume should actually be small, so a more minimal UI might be appropriate, which would be handy if an embeddable logging UI component is a ways off. @formgeist is exploring the possibilities in parallel with this issue.
Do we annotate log lines with `transaction.id`/`span.id`? I thought we only did this for `trace.id`.
The complicated part of embedding the logs is the source configuration, i.e. which log indices to look at and which columns to show. I see two options for how to implement it:

1. the embedding app passes the full source configuration (indices, columns, etc.) to the component as props, or
2. the component reads a stored source configuration, e.g. the one from the Logs UI settings.

The former is more powerful but also requires more effort and handling of edge cases. The latter is more restricted but requires fewer changes on the logs stream side.
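A rough sketch of what those two shapes could look like from the embedding side; every name here is illustrative, none of it is actual Logs UI API:

```ts
// Option 1 (stateless): the embedding app passes the full source
// configuration as props. All names are hypothetical.
interface InlineSourceProps {
  logIndices: string; // e.g. 'filebeat-*,logs-*'
  columns: string[];  // e.g. ['timestamp', 'message']
  filter: string;     // e.g. 'trace.id:"0570667f4e27e2cac0d6c5b311c65918"'
}

// Option 2 (stateful): the component looks up a stored source
// configuration and only receives query-specific inputs.
interface StoredSourceProps {
  sourceId: string; // id of a saved Logs UI source configuration
  filter: string;
}
```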
In the Java agent, we add `trace.id` and `transaction.id` to log records, but not `span.id`.
@weltenwort The first approach definitely sounds more appealing to me, since it's stateless and therefore typically less error-prone, but if the other approach is significantly easier on your part we might have to go that route. Either way, how can APM know which log indices to link to?
Thanks @felixbarny. Not having span-scoped logs sounds okay to me.
And how can the Logs UI know? One way out might be to rely on the new indexing strategy, from which we could derive a convention about the data type and namespace. Does the APM config have the concept of a namespace?
I haven't spent any time looking into the new indexing strategy for APM. "namespace" is the last part, e.g. `default` in `logs-nginx.access-default`.
I would expect APM to use the same log source the Logs UI is using - there is a single data source configuration for the Logs UI, right? That is, logs will be in whatever indices that source configuration points to.
This is common with Jaeger and through OpenTracing, and users are already doing their own log correlation using the trace ID.
I'm thinking through some options for how we can make this as simple as possible. I don't think most apps in Kibana are going to want logs from specific indices, but rather "just please give me 'the logs' that match these ECS-based and/or time-based criteria", so I think we'll need to be able to accommodate that based on a very simple index-based convention.
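As a sketch of that idea (all names invented for illustration), the caller would describe the logs it wants rather than the indices they live in:

```ts
// Hypothetical convention-based lookup: callers specify ECS- and
// time-based criteria; which indices that maps to (e.g. 'logs-*')
// is decided by convention on the logs side.
interface LogsQuery {
  filter: string;    // ECS-based KQL, e.g. 'trace.id:"…"'
  startTime: number; // epoch ms
  endTime: number;   // epoch ms
}

declare function getLogs(
  query: LogsQuery
): Promise<Array<{ timestamp: number; message: string }>>;
```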
@sqren with the way things are set up today, yes, I think that's the sanest and simplest approach. If a user has created a separate space in order to look at a specific subsection of logs with a much more custom source, the logs component may not work as well in that use case, but we may be able to live with that. I may be forgetting some problems with this approach, but I don't think we need to make the indices customizable for a component like this.
Agree, I think it should simply read from the indices specified in the Logs settings. In that case, what do you think about a log component with the following interface?

```jsx
<LogStream
  timestamp="1590690626648"
  filter={encodeURIComponent('trace.id:"0570667f4e27e2cac0d6c5b311c65918"')}
/>
```
The log stream is now limited using a time range as well (for performance and uniformity reasons). So we have three timestamps:

- the start of the visible time range,
- the end of the visible time range, and
- the timestamp the stream should be positioned at.

For the sake of keeping things organized, could we formulate the requirements as a response to #70513?
What is the purpose of the `timestamp` prop?
sgtm 👍
Sure, I'll add that.
It's the time that the log stream should be scrolled to.
Okay, makes sense 👍
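Putting the pieces together, a combined usage could look like the sketch below; the prop names are taken from the examples earlier in this thread and are not the final API:

```jsx
// startTime/endTime bound the visible range; timestamp is the position
// the stream is initially scrolled to (see the discussion above).
<LogStream
  startTime="1590690626648"
  endTime="1590690636648"
  timestamp="1590690626648"
  filter={encodeURIComponent('trace.id:"0570667f4e27e2cac0d6c5b311c65918"')}
/>
```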
Would it make sense to add the `span.id` to log records as well?
It might, but I don't think it's worth it given the alternatives and the overhead. Adding the span ID would significantly increase the overhead, especially if there are many spans (some users might do 100s or 1000s of DB queries per transaction). Depending on the agent and how it integrates with the loggers, this overhead also occurs if there are no log statements associated with a span. As we already add the `trace.id` and `transaction.id`, I don't think we also need the `span.id`. Which UI features do you have in mind that could benefit from a `span.id` on log lines?
FYI we are doing additional design work with @formgeist on this topic.
👋! We have made a first version of an embeddable logs component in #76262. Please take a look and play with it. Feel free to ask me any questions or send feature requests :)
Awesome @afgomez! I'll ping you later this week and get it implemented in APM. Looks like it's going to be very easy. Thanks for doing this!
Replaced by #79995 |
Blocked: an embeddable log component from the Logs team is required.
Summary
Based on the design solution in elastic/apm#179
We're currently linking out to the Logs app to display the related logs for a specific `trace.id`, but it would be a better experience to show the logs in the context of the trace sample, immediately available from within the APM app.

Design proposal
Figma prototype link
Possible API
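Following the `<LogStream>` proposal from the discussion above (prop names are not final), the API could look like:

```jsx
<LogStream
  timestamp="1590690626648"
  filter={encodeURIComponent('trace.id:"0570667f4e27e2cac0d6c5b311c65918"')}
/>
```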
Prerequisites for starting implementation
There are a number of things that can be seen as dependencies for this feature, and this work should be planned ahead when we prioritize it.