
ServerHttpObservationFilter does not fill pathPattern #32099

Closed
bergerdenes opened this issue Jan 24, 2024 · 12 comments

Assignees: bclozel
Labels: for: external-project, status: invalid

Comments

@bergerdenes commented Jan 24, 2024

Affects: 6.0.15, 6.1.3


The pathPattern property is necessary for collecting metrics via Micrometer, but it is never populated.

In the org.springframework.http.server.observation.DefaultServerRequestObservationConvention class, the protected KeyValue uri(ServerRequestObservationContext context) method calls context.getPathPattern(), but it always returns null.
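
For reference, the lookup described above behaves roughly like the following sketch (a simplified illustration, not the verbatim framework source, which also special-cases redirects and not-found responses):

```java
import io.micrometer.common.KeyValue;
import org.springframework.http.server.observation.DefaultServerRequestObservationConvention;
import org.springframework.http.server.observation.ServerRequestObservationContext;

// Simplified illustration: when no pathPattern has been set on the context,
// the uri tag degrades to "UNKNOWN", which is the symptom reported here.
public class UriTagSketch extends DefaultServerRequestObservationConvention {

    @Override
    protected KeyValue uri(ServerRequestObservationContext context) {
        String pattern = context.getPathPattern();
        return KeyValue.of("uri", pattern != null ? pattern : "UNKNOWN");
    }
}
```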

@spring-projects-issues added the status: waiting-for-triage label Jan 24, 2024
@bclozel (Member) commented Jan 24, 2024

We have tests that check for this case. Can you elaborate? What's the endpoint? Can you provide a minimal sample application that reproduces the problem?

@bclozel added the status: waiting-for-feedback label Jan 24, 2024
@bergerdenes (Author) commented Jan 24, 2024

Repro repository: https://github.com/bergerdenes/metrics-repro
Similar issue reported (and others referenced within): #31028

@spring-projects-issues added the status: feedback-provided label and removed the status: waiting-for-feedback label Jan 24, 2024
@bclozel (Member) commented Jan 24, 2024

I'm sorry, I can't reproduce the problem. I ran many requests on my local machine, including with benchmarking tools, and I'm seeing:

$ curl localhost:8080/metrics | grep ^http_server_requests_seconds

http_server_requests_seconds{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/metrics",quantile="0.5",} 0.0
http_server_requests_seconds{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/metrics",quantile="0.95",} 0.0
http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 5.0
http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.066770067
http_server_requests_seconds{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/test",quantile="0.5",} 0.007856128
http_server_requests_seconds{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/test",quantile="0.95",} 0.016769024
http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/test",} 30016.0
http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/test",} 256.0871663
http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.007237065
http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/test",} 0.054313366

Thoughts?

@bclozel added the status: waiting-for-feedback label and removed the status: feedback-provided label Jan 24, 2024
@bclozel self-assigned this Jan 24, 2024
@bergerdenes (Author) commented Jan 24, 2024

My output for the same:

http_server_requests_active_seconds_max{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.038592375
http_server_requests_active_seconds{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",quantile="0.5",} 0.0385395
http_server_requests_active_seconds{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",quantile="0.95",} 0.038592375
http_server_requests_active_seconds_active_count{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",} 1.0
http_server_requests_active_seconds_duration_sum{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.038580208
http_server_requests_seconds{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/test",quantile="0.5",} 0.001409024
http_server_requests_seconds{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/test",quantile="0.95",} 0.01622016
http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/test",} 3.0
http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/test",} 0.018441751
http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/test",} 0.016043917

@spring-projects-issues added the status: feedback-provided label and removed the status: waiting-for-feedback label Jan 24, 2024
@bclozel (Member) commented Jan 24, 2024

I'm only seeing "UNKNOWN" tags for the http_server_requests_active metric, which is expected. For each observation, the Micrometer metrics handler creates a long task timer that measures ongoing requests. Its tags are collected as soon as the observation starts, and at that stage the instrumentation cannot yet know which controller handler the request will be mapped to.
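
To make the timing concrete, here is a hypothetical sketch using the public APIs involved (UriPatternBackfill and afterHandlerMapping are illustrative names; Spring MVC does the equivalent internally once a HandlerMapping has matched the request, which is why requests that never pass through a Spring MVC HandlerMapping end up with a null pathPattern):

```java
import jakarta.servlet.http.HttpServletRequest;
import org.springframework.web.filter.ServerHttpObservationFilter;
import org.springframework.web.servlet.HandlerMapping;

// Hypothetical illustration of the timing described above: the observation
// context already exists when ServerHttpObservationFilter starts the
// observation, but the path pattern can only be filled in after a
// HandlerMapping has matched the request.
public class UriPatternBackfill {

    static void afterHandlerMapping(HttpServletRequest request) {
        String bestPattern = (String) request.getAttribute(
                HandlerMapping.BEST_MATCHING_PATTERN_ATTRIBUTE);
        if (bestPattern != null) {
            ServerHttpObservationFilter.findObservationContext(request)
                    .ifPresent(context -> context.setPathPattern(bestPattern));
        }
    }
}
```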

That is an interesting point, though. If you believe that this is confusing, or if you'd like to have an option to disable those long task timers, please raise a Micrometer issue in order to discuss this matter with the team. The metrics handler code is managed by Micrometer directly.

In light of that, I'm closing this issue. Thanks!

@bclozel closed this as not planned Jan 24, 2024
@bclozel added the status: invalid and for: external-project labels and removed the status: waiting-for-triage and status: feedback-provided labels Jan 24, 2024
@bergerdenes (Author) commented Jan 24, 2024

@bclozel please reopen the issue.
I have updated the repro project with an explanation.

If the "metrics" endpoint is called first, then all of the API endpoints show up as UNKNOWN afterwards.

@bclozel (Member) commented Jan 24, 2024

That's an entirely different setup now! The application is not using Spring MVC; it's using Jersey.
In this case, the Spring instrumentation cannot know the URI pattern mapped to the JAX-RS endpoint. Please use the Jersey instrumentation for your application.
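
For example, the Micrometer Jersey binder can be registered on the Jersey ResourceConfig roughly as follows (a minimal sketch; with Spring Boot, JerseyServerMetricsAutoConfiguration normally wires this up automatically when actuator and Jersey are present):

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.jersey.server.DefaultJerseyTagsProvider;
import io.micrometer.core.instrument.binder.jersey.server.MetricsApplicationEventListener;
import org.glassfish.jersey.server.ResourceConfig;

// Minimal sketch of wiring the Micrometer Jersey instrumentation by hand.
public class JerseyConfig extends ResourceConfig {

    public JerseyConfig(MeterRegistry registry) {
        register(new MetricsApplicationEventListener(
                registry,
                new DefaultJerseyTagsProvider(),
                "http.server.requests", // same metric name the MVC filter uses
                true));                 // auto-time all requests
        // register(...) your JAX-RS resources here
    }
}
```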

@bergerdenes (Author) commented Jan 24, 2024

Then what is the explanation for this:

If you change the order (i.e. call a Jersey endpoint first),
$ curl localhost:8080/jersey/test
$ curl localhost:8080/metrics | grep ^http_server_requests
then the metrics are good.

@bclozel (Member) commented Jan 24, 2024

@bergerdenes I don't know. That's probably a bug to be reported to the Jersey instrumentation? The Spring instrumentation is not being used here. Tags are created by the io.micrometer.core.instrument.binder.jersey.server.DefaultJerseyTagsProvider.

Edit: Apologies, in this case the instrumentation is done by Micrometer directly. I was confused because it's also been contributed to Jersey directly.

@bergerdenes (Author) commented Jan 24, 2024

I have debugged into DefaultJerseyTagsProvider, and JerseyTags.uri(event) returns the correct value in both cases (i.e. when the "metrics" endpoint is called first and when the "test" endpoint is called first), so I don't know how to make progress.
I have also placed a breakpoint inside DefaultServerRequestObservationConvention#uri, but context.getPathPattern() is null most of the time.

@bclozel (Member) commented Jan 24, 2024

I think you should try disabling the Spring MVC instrumentation by excluding org.springframework.boot.actuate.autoconfigure.observation.web.servlet.WebMvcObservationAutoConfiguration. If this works, it means the two instrumentations are somehow overstepping each other.

You can do so by updating your application with:

@SpringBootApplication(exclude=WebMvcObservationAutoConfiguration.class)

If this solves the problem, you can create a Spring Boot issue so we can figure out how this can be better addressed.
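
Put together, a minimal sketch of the suggested change (MetricsReproApplication is a placeholder name for the repro project's application class):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.actuate.autoconfigure.observation.web.servlet.WebMvcObservationAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Excludes the Spring MVC observation auto-configuration so that only the
// Jersey instrumentation records http.server.requests metrics.
@SpringBootApplication(exclude = WebMvcObservationAutoConfiguration.class)
public class MetricsReproApplication {

    public static void main(String[] args) {
        SpringApplication.run(MetricsReproApplication.class, args);
    }
}
```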

@bergerdenes (Author) commented Jan 24, 2024

@SpringBootApplication(exclude=WebMvcObservationAutoConfiguration.class)

solved the issue, thanks for the hint.

I hope it is adequate if I just create a Spring Boot issue and reference this issue here...

jenkins-daemon pushed a commit to hortonworks/cloudbreak that referenced this issue Jan 25, 2024
as it clashes with DefaultJerseyTagsProvider, rendering the Prometheus
actuator output useless by showing "UNKNOWN" in the `uri` field
instead of the real path pattern.
This only happens with the Jersey-annotated endpoints that we use;
Spring-annotated endpoints work correctly, just for the record.

See issues:
- spring-projects/spring-framework#32099
- spring-projects/spring-boot#39294