Lua HTTP Filter - httpCall bottleneck during burst of traffic #37796

Open
laurodd opened this issue Dec 23, 2024 · 1 comment
Labels
area/circuit_breaker area/lua question Questions that are neither investigations, bugs, nor enhancements

Comments


laurodd commented Dec 23, 2024

Title: Lua HTTP Filter - httpCall bottleneck during burst of traffic

Description:
We are currently facing bursts of traffic (from 3k RPS up to 50k RPS).

We use the Lua HTTP filter to call a bot-protection API for each request, and we are getting a high number of 503s from the upstream when calling that cluster.

We tried to tune the Envoy upstream cluster configuration to handle this volume of connections to the upstream, without success.

However, we found that defining multiple clusters with the same configuration (only appending a number to the name) and calling them "randomly" from the Lua file improved performance by a lot and avoided a large number of 503s.

  local clusterName = "apicluster" .. math.random(1,8)

  local headers, response_body = request_handle:httpCall(
    clusterName,
    request_header,
    payload_body, API_TIMEOUT
  )

We would like to ask for your help to identify whether we are missing a configuration option, whether we did something wrong, or whether what we did makes sense from your point of view.

Below we have tried to share the relevant information; if you need anything else from us, let us know.

Thanks!

Context

  • Our Envoy setup has a Lua HTTP filter that makes an httpCall to an external API (bot detection) to decide whether to block the incoming request before it reaches the origin

  • When the burst happens, our configuration seems overwhelmed by the number of requests and starts to return 503s

  • We increased the timeouts (on the cluster and on the httpCall in the Lua filter); it improved the situation, but it did not fully absorb the burst

  • We noticed that vertical scaling also helps, but it did not solve the problem either

Reproduction

We created a dedicated machine to launch the requests with 10K connections:

  • wrk (apt install wrk)
  • ulimit -n 65535
  • wrk -c 10000 -t 4 -d 60s http://{ENVOY_IP}:${ENVOY_PORT}
  clusters:
  - name: apicluster
    connect_timeout: 0.75s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: apicluster

In our case, we can see that our Envoy server (16 vCPU, 64 GB) reaches 90+% CPU usage and the stats show a lot of 503s: 1797585 out of 2071053 (86.79%)

  wrk -t 4 -c 10000 -d 60s
  Running 1m test
  4 threads and 10000 connections
  2071053 requests in 1.00m, 1.16GB read
cluster.apicluster.upstream_rq_403: 281907
cluster.apicluster.upstream_rq_4xx: 281907
cluster.apicluster.upstream_rq_503: 1797585
cluster.apicluster.upstream_rq_504: 1209
cluster.apicluster.upstream_rq_5xx: 1798794

Investigation

  • As previously mentioned, increasing the timeouts and scaling vertically helped, but did not fully resolve the situation

  • During the reproduction, we noticed that our Envoy server was receiving the 10K connections from the wrk server, but the number of connections to the API was not able to grow beyond a certain level, which for us was causing the issue:

  • We used ss to monitor that:


ss -t -a | grep ESTAB | grep ${WRK_SERVER_IP} | wc -l
10000

ss -t -a | grep ESTAB | grep ${API_SERVER_IPS} | wc -l
3092


  • As previously mentioned, we tried multiple configurations to increase how many connections we were able to send to the API, without much success

Workaround

  • We noticed that Envoy uses a worker thread for each vCPU and handles the requests based on that

  • So we basically had the idea to replicate the cluster and, in the Lua code, have the httpCall do a "round robin" across the copies (in this case we used math.random)

  local clusterName = "apicluster" .. math.random(1,8)

  local headers, response_body = request_handle:httpCall(
    clusterName,
    request_header,
    payload_body, API_TIMEOUT
  )
  clusters:
  - name: apicluster1
    connect_timeout: 0.75s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: apicluster1
...

  clusters:
  - name: apicluster2
    connect_timeout: 0.75s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: apicluster2
...
  • After that, we can see that the 503 errors are almost gone and we are able to ingest and handle the 10K connections: 1006 / 1587717 (0.06% of 503s)
wrk -t 4 -c 10000 -d 60s 

Running 1m test
  4 threads and 10000 connections
  1587717 requests in 1.00m, 3.04GB read
cluster.apicluster1.upstream_rq_503: 81
cluster.apicluster2.upstream_rq_503: 97
cluster.apicluster3.upstream_rq_503: 160
cluster.apicluster4.upstream_rq_503: 89
cluster.apicluster5.upstream_rq_503: 68
cluster.apicluster6.upstream_rq_503: 242
cluster.apicluster7.upstream_rq_503: 187
cluster.apicluster8.upstream_rq_503: 82
ss -t -a | grep ESTAB | grep ${WRK_SERVER_IP}  | wc -l
10000
ss -t -a | grep ESTAB | grep ${API_SERVER_IPS} | wc -l
9016

Other information:

We took a look at the source code (lua_filter.cc) and it seems there is a thread_local_cluster for each cluster, so maybe this scales better than having just one cluster deal with everything?

const auto thread_local_cluster = filter.clusterManager().getThreadLocalCluster(cluster);

WRK server

  • Ubuntu 24.04
  • 4 vCPU 16G
  • ulimit -n 65535

Envoy Server

  • EC2 m4.4xlarge 16 vCPU 64 GB
  • Docker version 27.1.2, build d01f264
  • docker run -dit --name envoy-container --network "host" -p 9901:9901
  • Envoy image: v1.31-latest
  • Envoy version: 688c4bb/1.31.5/Clean/RELEASE/BoringSSL
  • Debian GNU/Linux 12 (bookworm)

Envoy Configuration

static_resources:
  listeners:
  - name: main
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: auto
          use_remote_address: true
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: web_service
          http_filters:
          - name: envoy.lua
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
              inline_code: |
                  assert(loadfile("/apicluster.lua"))({})
          - name: envoy.router
            typed_config:
              "@type": [type.googleapis.com/envoy.extensions.filters.http.router.v3.Router](http://type.googleapis.com/envoy.extensions.filters.http.router.v3.Router)

  clusters:
  - name: apicluster
    connect_timeout: 0.75s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: apicluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: api.example.co
                port_value: 443
    circuit_breakers:
      thresholds:
        - max_connections: 10000


Lua file

-- we do some header manipulation and ...


function envoy_on_request(request_handle)
  local headers = request_handle:headers()

  -- some code

  local clusterName = "apicluster" .. math.random(1,8)

  local headers, response_body = request_handle:httpCall(
    clusterName,
    request_header,
    payload_body, API_TIMEOUT
  )

-- some more code

@laurodd laurodd added the triage Issue requires triage label Dec 23, 2024

KBaichoo commented Dec 24, 2024

Hey @laurodd ,

It seems to me that you are running into circuit breakers tripping. Your workaround of adding additional clusters adds additional circuit breakers, thus "working around" the issue.

See https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/circuit_breaker.proto#config-cluster-v3-circuitbreakers-thresholds for configuring circuit breakers; you likely need to tune max_requests and max_pending_requests.
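
As a rough, untested sketch on top of the apicluster definition you already have (the numbers below are placeholders to size for your actual burst, not recommendations; max_pending_requests and max_requests both default to 1024):

  clusters:
  - name: apicluster
    connect_timeout: 0.75s
    type: strict_dns
    lb_policy: round_robin
    circuit_breakers:
      thresholds:
      - priority: DEFAULT
        max_connections: 10000
        max_pending_requests: 10000   # defaults to 1024
        max_requests: 10000           # defaults to 1024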

You can validate that this was the issue by checking whether the circuit breaker stats for the cluster show it tripping:
https://www.envoyproxy.io/docs/envoy/latest/configuration/upstream/cluster_manager/cluster_stats#circuit-breakers-statistics
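
For example, assuming the admin interface is reachable on port 9901 as in your docker run command, something like this should surface the relevant overflow counters and circuit_breakers gauges for the cluster:

  curl -s "http://localhost:9901/stats?filter=apicluster" | grep -E 'circuit_breakers|overflow'

A growing upstream_rq_pending_overflow counter, or the circuit_breakers.default.rq_open / rq_pending_open gauges sitting at 1 during the burst, would point to the circuit breaker being the limiter.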

@KBaichoo KBaichoo added question Questions that are neither investigations, bugs, nor enhancements area/lua area/circuit_breaker and removed triage Issue requires triage labels Dec 24, 2024