Support for configuring more gRPC client settings #1041
If this is a valid ask, I would be happy to send a PR for it.
I have no experience with this part of gRPC.
@bogdandrutu Could you please share what you think about this?
Please point me to the config that needs to be changed in gRPC. We should support this if it is a property in the gRPC client DialOptions.
@bogdandrutu This https://godoc.org/google.golang.org/grpc#WithBalancerName is the config that I have been using here https://github.com/open-telemetry/opentelemetry-collector/blob/master/config/configgrpc/configgrpc.go#L170 in my patched otel collector.
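For context, a minimal sketch of how that dial option is used in plain grpc-go. The endpoint is hypothetical; WithBalancerName takes the name of a registered balancer, such as roundrobin.Name.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/balancer/roundrobin"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The "dns:///" prefix forces the DNS resolver, which returns every
	// address behind the name; WithBalancerName(roundrobin.Name) then
	// spreads RPCs across all of them instead of picking the first.
	conn, err := grpc.DialContext(ctx,
		"dns:///jaeger-collector.example.local:14250", // hypothetical endpoint
		grpc.WithInsecure(),
		grpc.WithBalancerName(roundrobin.Name),
	)
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
}
```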
Sorry that I missed it. Yes, please make a PR to add the necessary config to the gRPC client settings.
Hi @RashmiRam @bogdandrutu The service is the default ClusterIP and I am using "svc_name.default.svc.local" to connect from the exporters to the collectors, yet I am seeing one (or a few) otel collector pods doing most of the work. I suspect that's because connection-level load balancing does not work for gRPC: DNS will always resolve a ClusterIP service to the single service IP. My question is: would a headless service be needed here?
Would appreciate your take on this. Thanks.
Hello @atibdialpad
Yes. As you rightly said, DNS will always give you the service IP, and it doesn't matter which load balancer you choose; load balancing is handled for you on the receiving k8s service side, and only at the connection level. Since gRPC is HTTP/2 and multiplexes all requests over one connection, you can end up with every request from a client pod going to a single server again.
A headless svc will work here, since it returns all the pod IPs, and client-side gRPC LB will then do the load balancing based on the balancer you have configured.
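The difference is visible at the DNS level. A quick sketch (the service names are hypothetical) of why a ClusterIP service gives client-side LB nothing to balance across while a headless service does:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// A ClusterIP service resolves to its single virtual IP, so the gRPC
	// client sees one address and any balancer degenerates to one backend.
	addrs, err := net.LookupHost("otel-collector.default.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ClusterIP service:", addrs) // e.g. [10.96.0.42]

	// A headless service (clusterIP: None) resolves to every ready pod IP,
	// giving the round_robin balancer real backends to spread load across.
	addrs, err = net.LookupHost("otel-collector-headless.default.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("headless service:", addrs) // e.g. [10.1.2.3 10.1.4.5]
}
```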
Is your feature request related to a problem? Please describe.
There is no way to configure the load balancer name in the gRPC client settings, and the default is pick_first, which won't work when the gRPC endpoint is a plain DNS name resolving to multiple backends.
Describe the solution you'd like
Allow gRPC client settings such as the balancer name to be configured via the config file.
Describe alternatives you've considered
Nothing that I can think of.
Additional context
I have a setup where the OpenTelemetry Collector is running as an agent and is configured with a Jaeger exporter. The Jaeger collectors are behind a DNS name. In this case, I need the otel collector to do the load balancing, as there is no external load balancer. By default, the gRPC client on the otel collector side uses the pick_first LB, and there is no way to configure the LB name in the gRPC client settings, so all the requests go to a single Jaeger collector.
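To make the ask concrete, here is a hypothetical sketch of the requested change in config/configgrpc. The struct, field, and tag names are illustrative, not the collector's actual API.

```go
package configgrpc

import "google.golang.org/grpc"

// GRPCClientSettings sketches a client config struct with the proposed
// balancer field added; only the new field is of interest here.
type GRPCClientSettings struct {
	Endpoint     string `mapstructure:"endpoint"`
	BalancerName string `mapstructure:"balancer_name"` // e.g. "round_robin"
}

// ToDialOptions translates the settings into grpc.DialOption values.
func (s *GRPCClientSettings) ToDialOptions() []grpc.DialOption {
	var opts []grpc.DialOption
	if s.BalancerName != "" {
		// The name must match a balancer registered with grpc, such as
		// the one in google.golang.org/grpc/balancer/roundrobin.
		opts = append(opts, grpc.WithBalancerName(s.BalancerName))
	}
	return opts
}
```

With something like this in place, an exporter config could set balancer_name: round_robin and have it flow through to the dial options.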