Add Feast Serving gRPC call metrics #509
Conversation
Hi @ashwinath. Thanks for your PR. I'm waiting for a gojek member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @davidheryanto
Which issue is this in relation to?
It was an issue @davidheryanto brought up regarding Feast Serving needing to expose a metric to count erroneous requests. As of now, the stats from the gRPC calls themselves were not captured; only the methods they invoked were, which misses the gRPC error codes. This PR adds the gRPC codes into the labels of
Yeah, currently for Feast Serving these are the metrics we record. But it's missing a metric about errors, which is quite important because it may indicate something wrong with the Feast Serving gRPC servers. So I think we should also include a gRPC status code label in the metric: https://github.com/grpc/grpc/blob/master/doc/statuscodes.md That way we can identify how many requests Feast Serving failed to handle, and for which reasons.
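To make the idea concrete, here is a minimal plain-Java sketch (not the PR's actual implementation, and with hypothetical class and method names) of counting requests keyed by method name and gRPC status code — the same label pair the interceptor would attach to a Prometheus counter. In real grpc-java code the status would typically be captured by wrapping the `ServerCall` and observing `close(Status, Metadata)`; here a simple map stands in for the Prometheus client.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a request counter labeled by gRPC method and
// status code, mimicking Counter.labels(method, statusCode).inc().
public class RequestCounterSketch {
    private final Map<String, Long> counts = new ConcurrentHashMap<>();

    // Increment the count for this (method, statusCode) label pair.
    public void record(String method, String statusCode) {
        counts.merge(method + "|" + statusCode, 1L, Long::sum);
    }

    // Read back the count for a label pair (0 if never recorded).
    public long get(String method, String statusCode) {
        return counts.getOrDefault(method + "|" + statusCode, 0L);
    }

    public static void main(String[] args) {
        RequestCounterSketch c = new RequestCounterSketch();
        c.record("feast.serving.ServingService/GetOnlineFeatures", "OK");
        c.record("feast.serving.ServingService/GetOnlineFeatures", "INTERNAL");
        c.record("feast.serving.ServingService/GetOnlineFeatures", "OK");
        System.out.println(c.get("feast.serving.ServingService/GetOnlineFeatures", "OK")); // 2
    }
}
```

With the status code as a label, a query can separate `INTERNAL` or `DEADLINE_EXCEEDED` failures from `OK` responses instead of lumping all calls together.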
serving/src/main/java/feast/serving/interceptors/GrpcMonitoringInterceptor.java
Hi, I split them into two metrics. Load tested with 20 records:

```shell
ghz --insecure \
  --call feast.serving.ServingService/GetOnlineFeatures localhost:6566 \
  --concurrency=10 \
  --qps=1000 \
  --total=10000 \
  --connections=1 \
  --data='{
    "features": {
      "project": "your_project_name",
      "name": "city",
      "version": 1
    },
    "entity_rows": [
      {
        "fields": {
          "driver_id": {
            "int64_val": 1234
          }
        }
      }
    ]
  }'
```

Hovering around P95: 4 ms, P99: 5 ms for both the master branch and this PR. Similar results at 2000 QPS as well.
Looks much better, thanks @ashwinath and @davidheryanto |
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: ashwinath, woop. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/ok-to-test
/test test-end-to-end-batch
A test might be good, but if we feel this looks like overkill for a small metrics interceptor, then I think it'd be nice to mention the method name label in the class Javadoc with an example of what it looks like.
```java
 * GrpcMonitoringInterceptor intercepts GRPC calls to provide request latency histogram metrics in
 * the Prometheus client.
 */
public class GrpcMonitoringInterceptor implements ServerInterceptor {
```
Bikeshedding on naming, if you'll bear with me… it's more about request/response metrics than monitoring; monitoring would be what you do externally with the metrics. Also, I wonder if anyone would object to dropping the `Grpc` prefix — I imagine everything in the `interceptors` package will be a gRPC interceptor, so perhaps it's redundant. What do you think of:

```diff
-public class GrpcMonitoringInterceptor implements ServerInterceptor {
+public class RequestMetricsInterceptor implements ServerInterceptor {
```
Yes, sounds reasonable to me. I think we need to create a new PR?
Although the sample library for integrating Prometheus with gRPC in Java uses the "monitoring" term as well :)
https://github.com/grpc-ecosystem/java-grpc-prometheus/tree/master/src/main/java/me/dinowernli/grpc/prometheus
I guess as long as we're consistent.
/lgtm
What this PR does / why we need it:
Feast Serving does not expose gRPC Prometheus metrics.
Which issue(s) this PR fixes:
None
Fixes #
Does this PR introduce a user-facing change?:
None