
Allow configuration of upstream connection limits in Envoy #6829

Merged
6 commits merged into master on Dec 3, 2019

Conversation

@crhino (Contributor) commented Nov 22, 2019

Fixes #6359

This PR adds a limits field to the upstream configuration of a Connect Envoy proxy. The limits field is intended to contain configuration governing connections/requests from the given service to the specified upstream, such as max_connections, max_pending_requests, and max_requests.

This is currently only allowed on the proxy.upstreams[*].config part of the service definition; a future improvement would be to allow these upstream configurations to be set in a service's service-defaults central config.

Example:

services {
  name = "s1"
  port = 8080
  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "s2"
            local_bind_port = 5000
            config {
              limits {
                max_connections = 3
                max_pending_requests = 4
                max_requests = 5
              }
            }
          }
        ]
      }
    }
  }
}
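
The config block on an upstream is opaque as far as the service definition is concerned, so the proxy integration has to decode the limits section into a typed structure before it can use it. Below is a minimal sketch of that step, assuming a decode via github.com/mitchellh/mapstructure; the names UpstreamLimits and parseUpstreamLimits are illustrative and not necessarily what this PR implements.

// Hypothetical sketch only: decoding the opaque upstream config block shown
// above into a typed struct. Names are illustrative, not taken from the PR.
package main

import (
    "fmt"

    "github.com/mitchellh/mapstructure"
)

// UpstreamLimits mirrors the limits block in the example service definition.
// Pointer fields distinguish "unset" (fall back to Envoy's defaults) from an
// explicit zero value.
type UpstreamLimits struct {
    MaxConnections     *int `mapstructure:"max_connections"`
    MaxPendingRequests *int `mapstructure:"max_pending_requests"`
    MaxRequests        *int `mapstructure:"max_requests"`
}

// parseUpstreamLimits pulls the limits section out of the upstream's opaque
// config map and decodes it.
func parseUpstreamLimits(cfg map[string]interface{}) (UpstreamLimits, error) {
    var limits UpstreamLimits
    raw, ok := cfg["limits"]
    if !ok {
        return limits, nil // nothing configured; Envoy defaults apply
    }
    err := mapstructure.Decode(raw, &limits)
    return limits, err
}

func main() {
    // Equivalent of the config block in the example above.
    cfg := map[string]interface{}{
        "limits": map[string]interface{}{
            "max_connections":      3,
            "max_pending_requests": 4,
            "max_requests":         5,
        },
    }
    limits, err := parseUpstreamLimits(cfg)
    if err != nil {
        panic(err)
    }
    fmt.Println(*limits.MaxConnections, *limits.MaxPendingRequests, *limits.MaxRequests)
}

Using pointer fields keeps "not configured" distinguishable from an explicit zero, so unset limits can fall back to Envoy's own defaults.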

UX

Some relevant UX decisions I made; I'm looking for feedback on these.

Limits Block

I chose to scope these settings under a limits block to define them more concretely as related, as well as to simplify the implementation. It should also be a natural place to add more related configuration if necessary.

I considered naming this something to the effect of connection_limits, but felt that was a little too wordy when typing.

max_* Naming

After looking into the configuration of the big three proxies (Envoy, NGINX, HAProxy), I eventually settled on keeping the naming that Envoy uses for these values.

For max_connections and max_pending_requests, all three had similar configuration values, although they varied in whether those values were set on a per-server or per-{cluster,frontend} basis.

max_requests represents the maximum number of in-flight requests Envoy allows to an upstream cluster. This is mostly useful in an HTTP/2 context, since HTTP/1.1 requests are effectively bounded by the max_connections parameter. I was not able to find an equivalent configuration option in HAProxy or Nginx.

I also considered changing max_requests to max_inflight_requests or max_concurrent_requests to be more specific in naming, but decided it was better to keep 100% consistent with Envoy than to change a single name.
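
For context on where these values end up: Envoy enforces max_connections, max_pending_requests, and max_requests through the circuit-breaker thresholds on an upstream cluster. The sketch below shows roughly how decoded limits could be applied to a generated cluster; it assumes the v2 go-control-plane API with golang/protobuf wrappers, and the applyLimits/UpstreamLimits names are illustrative rather than taken from this PR.

// Hypothetical sketch only: applying user-configured limits to the Envoy
// cluster generated for an upstream, via circuit-breaker thresholds.
// Exact packages/wrapper types depend on the go-control-plane version in use.
package main

import (
    "fmt"

    envoy "github.com/envoyproxy/go-control-plane/envoy/api/v2"
    envoycluster "github.com/envoyproxy/go-control-plane/envoy/api/v2/cluster"
    "github.com/golang/protobuf/ptypes/wrappers"
)

// UpstreamLimits is the illustrative struct from the previous sketch.
type UpstreamLimits struct {
    MaxConnections     *int
    MaxPendingRequests *int
    MaxRequests        *int
}

// applyLimits sets circuit-breaker thresholds only for limits the user
// actually configured, leaving the rest at Envoy's defaults.
func applyLimits(c *envoy.Cluster, limits UpstreamLimits) {
    threshold := &envoycluster.CircuitBreakers_Thresholds{}
    if limits.MaxConnections != nil {
        threshold.MaxConnections = &wrappers.UInt32Value{Value: uint32(*limits.MaxConnections)}
    }
    if limits.MaxPendingRequests != nil {
        threshold.MaxPendingRequests = &wrappers.UInt32Value{Value: uint32(*limits.MaxPendingRequests)}
    }
    if limits.MaxRequests != nil {
        threshold.MaxRequests = &wrappers.UInt32Value{Value: uint32(*limits.MaxRequests)}
    }
    c.CircuitBreakers = &envoycluster.CircuitBreakers{
        Thresholds: []*envoycluster.CircuitBreakers_Thresholds{threshold},
    }
}

func main() {
    maxConns, maxPending, maxReqs := 3, 4, 5
    c := &envoy.Cluster{Name: "s2"}
    applyLimits(c, UpstreamLimits{
        MaxConnections:     &maxConns,
        MaxPendingRequests: &maxPending,
        MaxRequests:        &maxReqs,
    })
    fmt.Println(c.CircuitBreakers.Thresholds[0].MaxRequests.Value) // 5
}

When a limit is left unset, Envoy's documented defaults apply (1024 for each of these three thresholds).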

TODO

  • Update all related documentation with this new configuration value.
  • Decide whether to support service-defaults in this PR or future one.

This allows a user to configure the Envoy Connect proxy with
'max_connections', 'max_pending_requests', and 'max_requests'. These
values are defined in the local proxy on a per-service-instance basis
and should thus NOT be thought of as global-level or even service-level values.
@crhino crhino added the theme/connect Anything related to Consul Connect, Service Mesh, Side Car Proxies label Nov 22, 2019
@crhino crhino requested a review from a team November 22, 2019 15:45
@crhino (Contributor, Author) commented Nov 22, 2019

Another high-level question I have regarding this configuration/PR: do we want to use this opportunity to add support for configuring upstream values like these in service-defaults now, or should that be a future improvement?

From my understanding, centralizing configuration in service-defaults is the direction we want to trend towards, so only supporting this in the service definition feels like a slight step backwards.

@banks (Member) commented Nov 22, 2019

Great job @crhino!

I've not yet dug through the code (but will soon); I want to share initial feedback on the PR description and your questions.

Overall I think this looks great and is a sensible place to start. I think the next discussion after this PR is where the right place is to add this in service-defaults (or possibly service-resolver, but that's a longer conversation...), but let's do that separately.

On the naming chosen, I think all of it is great, and I appreciate the research and rationale there.

My only suggestion would be that max_requests alone is not super intuitive - it might seem like a hard limit on the total number of requests (which makes no sense), or a maximum number per connection before cycling, or per stream, or... What do you think about calling it max_requests_inflight or max_concurrent_requests?

whether we want to use this opportunity to add support for configuring upstream values like these in service-defaults now

You're exactly right, this is the plan, but it's actually way more subtle and difficult than it seems. I have an RFC in progress for it, but the sticky part is resolving how upstream ports are assigned and discovered once the definition isn't tied to a specific instance.

So yes that's coming but we shouldn't mix it up here just yet!

@crhino (Contributor, Author) commented Nov 25, 2019

I agree max_concurrent_requests is more descriptive than max_requests. My only hesitation is whether to adhere strictly to Envoy naming conventions or to change the name to be more descriptive.

I don't have a strong opinion on the naming, other than that I like max_concurrent_requests more than max_requests_inflight. I will update the names accordingly and see how it feels.

@banks (Member) left a review comment

@crhino this looks great!

I had one minor question inline, and I'll wait for the docs update to approve, as it seems worth doing in one PR.

I like this name better personally, and I don't think deviating from Envoy's naming is necessarily bad - we are trying to find a middle ground that's agnostic enough for any proxy, after all, so being precise/meaningful is more important to me than copying Envoy. It's a bit better UX, and it also hopefully means other proxy implementations end up implementing something closer, since the semantics are more precise.

Review comment on agent/xds/clusters.go (outdated, resolved)
@crhino (Contributor, Author) commented Nov 26, 2019

Link to the preview docs website

Edit: Just realized you can click the Netlify details and it will take you to the preview. 💥

@banks (Member) left a review comment

Approving because I think this is good as is.

Do you think we should add detail to the docs to point out that max_connections doesn't really do much for HTTP/2 traffic, while max_requests doesn't really do what you want for HTTP/1.1?

I don't think that's too Envoy-specific, as the way those protocols use connections is likely to be the same in any proxy.

@crhino crhino merged commit f3b54fa into master Dec 3, 2019
@crhino crhino deleted the connect/upstream-connection-limits branch December 3, 2019 20:13
@ghost commented Jan 25, 2020

Hey there,

This issue has been automatically locked because it is closed and there hasn't been any activity for at least 30 days.

If you are still experiencing problems, or still have questions, feel free to open a new one 👍.

@ghost ghost locked and limited conversation to collaborators Jan 25, 2020