Envoy not adding public listener or upstream listener when run with 1.10.1 Consul agent #10714

Closed
dschaaff opened this issue Jul 28, 2021 · 10 comments · Fixed by #10824
Labels: theme/envoy/xds (Related to Envoy support), type/bug (Feature does not function as expected)

dschaaff commented Jul 28, 2021

Overview of the Issue

I am in the process of testing the upgrade of our Consul Connect setup from 1.9.5 to 1.10.1. After upgrading the Consul agent to 1.10.1, none of the expected listeners are added to Envoy, breaking service mesh communication.

Reproduction Steps

Steps to reproduce this issue:

  • Create a cluster with servers on 1.10.1
  • Create a node with an agent on 1.9.5

Register Connect services. Here is my service config:

{
    "service": {
        "name": "ui",
        "id": "ui",
        "port": 80,
        "address": "10.20.203.196",
    }
}
{
  "service": {
    "name": "ui-sidecar-proxy",
    "id": "ui-sidecar-proxy",
    "port": 20000,
    "address": "10.20.203.196",
    "kind": "connect-proxy",
    "proxy": {
      "destination_service_id": "ui",
      "destination_service_name": "ui",
      "local_service_address": "127.0.0.1",
      "local_service_port": 80,
      "mode": "direct",
      "upstreams": [
        {
          "destination_name": "upstream-1",
          "destination_type": "service",
          "local_bind_port": 2000
        },
        {
          "destination_name": "upstream-2",
          "destination_type": "service",
          "local_bind_port": 3000
        }
      ]
    },
    "meta": {
      "availability_zone": "us-west-2a"
    },
    "checks": [
      {
        "deregister_critical_service_after": "10m",
        "interval": "10s",
        "name": "Proxy Public Listener",
        "tcp": "10.20.203.196:20000"
      },
      {
        "alias_service": "ui",
        "name": "Destination Alias"
      }
    ]
  }
}

In my case, the upstreams are registered on the nodes containing those services.
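
For reference, a minimal sketch of registering these definitions with the local agent, assuming the two blocks above are saved as ui.json and ui-sidecar-proxy.json (filenames are illustrative):

# Register the service and its sidecar proxy against the local agent
consul services register ui.json
consul services register ui-sidecar-proxy.json

# Or place the files in the agent's config directory and reload
# cp ui.json ui-sidecar-proxy.json /etc/consul/consul.d/ && consul reload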

I then bootstrap the Envoy config on the node with consul connect envoy -envoy-version=1.16.4 -proxy-id=ui-sidecar-proxy -bootstrap. Envoy is then started by a systemd service. With the 1.9 Consul agent, Envoy is correctly configured with the expected listeners:

netstat -tulpn | grep envoy
tcp        0      0 0.0.0.0:9102            0.0.0.0:*               LISTEN      28114/envoy
tcp        0      0 127.0.0.1:2000          0.0.0.0:*               LISTEN      28114/envoy
tcp        0      0 127.0.0.1:3000          0.0.0.0:*               LISTEN      28114/envoy
tcp        0      0 127.0.0.1:19000         0.0.0.0:*               LISTEN      28114/envoy
tcp        0      0 10.20.203.196:20000      0.0.0.0:*               LISTEN      28114/envoy
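
For reference, the bootstrap-and-start sequence described above is roughly the following sketch (paths are illustrative; the actual invocation is in the systemd unit posted in a later comment):

# Generate the Envoy bootstrap config for the registered sidecar proxy
consul connect envoy -envoy-version=1.16.4 -proxy-id=ui-sidecar-proxy -bootstrap > /tmp/envoy-bootstrap.yaml

# Start Envoy against that bootstrap config (in practice via a systemd-managed Docker container)
envoy --config-path /tmp/envoy-bootstrap.yaml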

Next, upgrade the Consul agent to 1.10.1. After upgrading the agent, run the consul connect envoy bootstrap command again and then restart Envoy with the new bootstrap config. No Envoy listeners are created apart from the local admin and Prometheus endpoints:

netstat -tulpn | grep envoy
tcp        0      0 127.0.0.1:19000         0.0.0.0:*               LISTEN      25468/envoy
tcp        0      0 0.0.0.0:9102            0.0.0.0:*               LISTEN      28114/envoy

At this point no traffic can be sent over the service mesh.
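
One way to confirm what Envoy has actually been handed (a sketch using the admin port 19000 from the bootstrap config; the curl commands are illustrative) is to query the Envoy admin API:

# Listeners Envoy currently has configured; the public listener on :20000 and the
# upstream binds on 2000/3000 should appear here when things are healthy
curl -s http://127.0.0.1:19000/listeners

# Full dump of the static config plus everything received over xDS (clusters, listeners, routes)
curl -s http://127.0.0.1:19000/config_dump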

Consul info for both Client and Server

Client info
agent:
	check_monitors = 0
	check_ttls = 0
	checks = 4
	services = 4
build:
	prerelease =
	revision = db839f18
	version = 1.10.1
consul:
	acl = enabled
	known_servers = 3
	server = false
runtime:
	arch = arm64
	cpu_count = 2
	goroutines = 195
	max_procs = 2
	os = linux
	version = go1.16.6
serf_lan:
	coordinate_resets = 0
	encrypted = true
	event_queue = 0
	event_time = 205547
	failed = 0
	health_score = 0
	intent_queue = 0
	left = 0
	member_time = 647721
	members = 45
	query_queue = 0
	query_time = 1
Server info
agent:
	check_monitors = 0
	check_ttls = 0
	checks = 2
	services = 2
build:
	prerelease =
	revision = db839f18
	version = 1.10.1
consul:
	acl = enabled
	bootstrap = false
	known_datacenters = 1
	leader = false
	leader_addr = 10.20.208.112:8300
	server = true
raft:
	applied_index = 373096103
	commit_index = 373096103
	fsm_pending = 0
	last_contact = 4.978615ms
	last_log_index = 373096103
	last_log_term = 388662
	last_snapshot_index = 373087177
	last_snapshot_term = 388662
	latest_configuration = [{Suffrage:Voter ID:6cd485d0-1ccc-5049-8105-79abb943105e Address:10.20.209.189:8300} {Suffrage:Voter ID:b254668f-b57e-f763-5f3e-3f5ca38ecbc5 Address:10.20.208.112:8300} {Suffrage:Voter ID:e0e2704e-96c9-589d-c4eb-02f0ecf11bd2 Address:10.20.202.159:8300}]
	latest_configuration_index = 0
	num_peers = 2
	protocol_version = 3
	protocol_version_max = 3
	protocol_version_min = 0
	snapshot_version_max = 1
	snapshot_version_min = 0
	state = Follower
	term = 388662
runtime:
	arch = arm64
	cpu_count = 2
	goroutines = 1478
	max_procs = 2
	os = linux
	version = go1.16.6
serf_lan:
	coordinate_resets = 0
	encrypted = true
	event_queue = 0
	event_time = 205547
	failed = 1
	health_score = 0
	intent_queue = 0
	left = 1
	member_time = 647725
	members = 47
	query_queue = 0
	query_time = 1
serf_wan:
	coordinate_resets = 0
	encrypted = true
	event_queue = 0
	event_time = 1
	failed = 0
	health_score = 0
	intent_queue = 0
	left = 0
	member_time = 1610
	members = 3
	query_queue = 0
	query_time = 1

Operating system and Environment details

This affects both our Amazon Linux 2 VMs and the sidecars injected by consul-k8s v0.25.0 in our Kubernetes clusters.

I'm happy to provide any additional info needed.

dschaaff (Author)

FYI, I got a notification that the issue label workflow failed on this.

dnephin added the theme/connect, theme/envoy/xds, and type/bug labels and removed the theme/connect label Jul 29, 2021
jkirschner-hashicorp (Contributor)

Hi @dschaaff,

Thanks for reaching out! There are a few pieces of additional information that would help us dig into this further:

  • Can you confirm the version of envoy on your path (send output of envoy --version)? We should make sure it's actually 1.16.4 (since -envoy-version=1.16.4 tells Consul how to interact with the running Envoy proxy, but has no effect on which version of Envoy is running).
  • You mention that you are using consul-k8s. Can you share a scrubbed version of your values.yaml file?
  • Can you share relevant sections of Envoy debug logs? (One way to capture them is sketched right after this list.)
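
A minimal sketch of enabling Envoy debug logging, assuming Envoy is started directly from a bootstrap file as described above; --log-level is a standard Envoy flag, and the output path is illustrative:

# Re-run Envoy with debug logging and capture the output
envoy --config-path /tmp/envoy-bootstrap.yaml --log-level debug 2>&1 | tee /tmp/envoy-debug.log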

jkirschner-hashicorp added the waiting-reply (Waiting on response from Original Poster or another individual in the thread) label Aug 2, 2021

dschaaff commented Aug 2, 2021

envoy version

 envoy --version

envoy  version: bf5d0eb44b781ac26ff1513700bcb114b7cf4300/1.16.4/Clean/RELEASE/BoringSSL

Values file for consul-helm

fullnameOverride: consul
# Available parameters and their default values for the Consul chart.

global:
  enabled: false
  domain: consul
  image: "xxxxx.dkr.ecr.us-west-2.amazonaws.com/consul:1.10.1"
  imageK8S: "xxxx.dkr.ecr.us-west-2.amazonaws.com/consul-k8s:0.25.0"
  imageEnvoy: "xxxxx.dkr.ecr.us-west-2.amazonaws.com/envoy:v1.16.4"
  datacenter: xxxx
  enablePodSecurityPolicies: false
  gossipEncryption:
    secretName: consul-secrets
    secretKey: gossip-encryption-key
  tls:
    enabled: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: []
    serverAdditionalIPSANs: []
    verify: true
    httpsOnly: false
    caCert:
      secretName: consul-secrets
      secretKey: ca.crt
    caKey:
      secretName: null
      secretKey: null

server:
  enabled: false
externalServers:
  enabled: true
  hosts: [xxxxxxxxxx]
  httpsPort: 443
  tlsServerName: null
  useSystemRoots: true
client:
  enabled: true
  image: null
  join:
    - "provider=aws tag_key=consul-datacenter tag_value=xxxxxx"
  grpc: true
  exposeGossipPorts: false
  resources:
    requests:
      memory: "400Mi"
      cpu: "200m"
    limits:
      cpu: "500m"
      memory: "400Mi"
  extraConfig: |
    {
      "telemetry": {
        "disable_hostname": true,
        "prometheus_retention_time": "6h"
      }
    }
  extraVolumes:
    - type: secret
      name: consul-secrets
      load: false
    - type: secret
      name: consul-acl-config
      load: true 
  tolerations: ""
  nodeSelector: null
  annotations: null
  extraEnvironmentVars:
    CONSUL_HTTP_TOKEN_FILE: /consul/userconfig/consul-secrets/consul.token

dns:
  enabled: true

ui:
  enabled: false

syncCatalog:
  enabled: true
  image: null
  default: true # true will sync by default, otherwise requires annotation
  toConsul: true
  toK8S: false
  k8sPrefix: null
  consulPrefix: null
  k8sTag: k8s-cluster-name
  syncClusterIPServices: true
  nodePortSyncType: ExternalFirst
  aclSyncToken:
    secretName: consul-secrets
    secretKey: consul-k8s-sync.token

connectInject:
  enabled: true
  default: false
  resources:
    requests:
      memory: "500Mi"
      cpu: "100m"
    limits:
      memory: "750Mi"
  healthChecks:
    enabled: true
    reconcilePeriod: "1m"
  overrideAuthMethodName: kubernetes
  aclInjectToken:
    secretName: consul-secrets
    secretKey: consul.token
  centralConfig:
    enabled: true

  sidecarProxy:
    resources:
      requests:
        memory: 150Mi
        cpu: 100m
      limits:
        memory: 150Mi

On non-Kubernetes nodes, I run Envoy inside a Docker container managed by systemd.

Systemd unit file


[Unit]
Description="Envoy HashiCorp Consul - A service mesh solution"
Documentation=https://envoyproxy.io/
After=docker.service
Requires=docker.service
ConditionFileNotEmpty=/etc/consul/consul.d/service_redacted-sidecar-proxy.json

[Service]
TimeoutStartSec=0
Restart=always
Type=simple
Environment=CONSUL_HTTP_ADDR=https://127.0.0.1:8501
Environment=CONSUL_GRPC_ADDR=https://127.0.0.1:8502
Environment=CONSUL_CACERT=/etc/pki/ca-trust/source/anchors/consul-connect-ca.crt
Environment=CONSUL_HTTP_TOKEN=redacted
ExecStartPre=/bin/bash -c '/usr/local/bin/consul connect envoy -envoy-version=1.16.4 -proxy-id=redacted-sidecar-proxy -bootstrap > /tmp/envoy-bootstrap.yaml'
ExecStart=/usr/bin/docker run --rm -v /tmp/envoy-bootstrap.yaml:/tmp/envoy-bootstrap.yaml -v /etc/pki/ca-trust/source/anchors/consul-connect-ca.crt:/etc/pki/ca-trust/source/anchors/consul-connect-ca.crt --network host --cap-add NET_BIND_SERVICE --cpu-shares 100 --ulimit nofile=65535 -e CONSUL_HTTP_ADDR=https://127.0.0.1:8501 -e CONSUL_GRPC_ADDR=https://127.0.0.1:8502 -e CONSUL_CACERT=/etc/pki/ca-trust/source/anchors/consul-connect-ca.crt -e CONSUL_HTTP_TOKEN=redacted -e ENVOY_UID=0 --name envoy 960048260646.dkr.ecr.us-west-2.amazonaws.com/envoy:v1.16.4 envoy --config-path /tmp/envoy-bootstrap.yaml 
ExecStop=/usr/bin/docker stop envoy
Restart=on-failure
[Install]
WantedBy=multi-user.target

Envoy bootstrap configs

output of bootstrap on consul 1.9.5

{
  "admin": {
    "access_log_path": "/dev/null",
    "address": {
      "socket_address": {
        "address": "127.0.0.1",
        "port_value": 19000
      }
    }
  },
  "node": {
    "cluster": "redacted",
    "id": "redacted",
    "metadata": {
      "namespace": "default",
      "envoy_version": "1.16.4"
    }
  },
  "static_resources": {
    "clusters": [
      {
        "name": "local_agent",
        "connect_timeout": "1s",
        "type": "STATIC",
        "tls_context": {
          "common_tls_context": {
            "validation_context": {
              "trusted_ca": {
                "inline_string": "redacted"
              }
            }
          }
        },
        "http2_protocol_options": {},
        "hosts": [
          {
            "socket_address": {
              "address": "127.0.0.1",
              "port_value": 8502
            }
          }
        ]
      },
      {
        "name": "self_admin",
        "connect_timeout": "5s",
        "type": "STATIC",
        "http_protocol_options": {},
        "hosts": [
          {
            "socket_address": {
              "address": "127.0.0.1",
              "port_value": 19000
            }
          }
        ]
      }
    ],
    "listeners": [
      {
        "name": "envoy_prometheus_metrics_listener",
        "address": {
          "socket_address": {
            "address": "0.0.0.0",
            "port_value": 9102
          }
        },
        "filter_chains": [
          {
            "filters": [
              {
                "name": "envoy.http_connection_manager",
                "config": {
                  "stat_prefix": "envoy_prometheus_metrics",
                  "codec_type": "HTTP1",
                  "route_config": {
                    "name": "self_admin_route",
                    "virtual_hosts": [
                      {
                        "name": "self_admin",
                        "domains": [
                          "*"
                        ],
                        "routes": [
                          {
                            "match": {
                              "path": "/metrics"
                            },
                            "route": {
                              "cluster": "self_admin",
                              "prefix_rewrite": "/stats/prometheus"
                            }
                          },
                          {
                            "match": {
                              "prefix": "/"
                            },
                            "direct_response": {
                              "status": 404
                            }
                          }
                        ]
                      }
                    ]
                  },
                  "http_filters": [
                    {
                      "name": "envoy.router"
                    }
                  ]
                }
              }
            ]
          }
        ]
      }
    ]
  },
  "stats_config": {
    "stats_tags": [
      {
        "regex": "^cluster\\.((?:([^.]+)~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.custom_hash"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:([^.]+)\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.service_subset"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.service"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.namespace"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.datacenter"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.routing_type"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.consul\\.)",
        "tag_name": "consul.destination.trust_domain"
      },
      {
        "regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.target"
      },
      {
        "regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+)\\.consul\\.)",
        "tag_name": "consul.destination.full_target"
      },
      {
        "regex": "^(?:tcp|http)\\.upstream\\.(([^.]+)(?:\\.[^.]+)?\\.[^.]+\\.)",
        "tag_name": "consul.upstream.service"
      },
      {
        "regex": "^(?:tcp|http)\\.upstream\\.([^.]+(?:\\.[^.]+)?\\.([^.]+)\\.)",
        "tag_name": "consul.upstream.datacenter"
      },
      {
        "regex": "^(?:tcp|http)\\.upstream\\.([^.]+(?:\\.([^.]+))?\\.[^.]+\\.)",
        "tag_name": "consul.upstream.namespace"
      },
      {
        "regex": "^cluster\\.((?:([^.]+)~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.custom_hash"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:([^.]+)\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.service_subset"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.service"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.namespace"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.datacenter"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.routing_type"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.consul\\.)",
        "tag_name": "consul.trust_domain"
      },
      {
        "regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.target"
      },
      {
        "regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+)\\.consul\\.)",
        "tag_name": "consul.full_target"
      },
      {
        "tag_name": "local_cluster",
        "fixed_value": "redacted"
      },
      {
        "tag_name": "consul.source.service",
        "fixed_value": "redacted"
      },
      {
        "tag_name": "consul.source.namespace",
        "fixed_value": "default"
      },
      {
        "tag_name": "consul.source.datacenter",
        "fixed_value": "redacted"
      }
    ],
    "use_all_default_tags": true
  },
  "dynamic_resources": {
    "lds_config": {
      "ads": {}
    },
    "cds_config": {
      "ads": {}
    },
    "ads_config": {
      "api_type": "GRPC",
      "grpc_services": {
        "initial_metadata": [
          {
            "key": "x-consul-token",
            "value": "redacted"
          }
        ],
        "envoy_grpc": {
          "cluster_name": "local_agent"
        }
      }
    }
  }
}

output of bootstrap on consul 1.10.1

{
  "admin": {
    "access_log_path": "/dev/null",
    "address": {
      "socket_address": {
        "address": "127.0.0.1",
        "port_value": 19000
      }
    }
  },
  "node": {
    "cluster": "redacted",
    "id": "redacted-sidecar-proxy",
    "metadata": {
      "namespace": "default",
      "envoy_version": "1.16.4"
    }
  },
  "static_resources": {
    "clusters": [
      {
        "name": "local_agent",
        "ignore_health_on_host_removal": false,
        "connect_timeout": "1s",
        "type": "STATIC",
        "transport_socket": {
          "name": "tls",
          "typed_config": {
            "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext",
            "common_tls_context": {
              "validation_context": {
                "trusted_ca": {
                  "inline_string": "redacted"
                }
              }
            }
          }
        },
        "http2_protocol_options": {},
        "loadAssignment": {
          "clusterName": "local_agent",
          "endpoints": [
            {
              "lbEndpoints": [
                {
                  "endpoint": {
                    "address": {
                      "socket_address": {
                        "address": "127.0.0.1",
                        "port_value": 8502
                      }
                    }
                  }
                }
              ]
            }
          ]
        }
      },
      {
        "name": "self_admin",
        "ignore_health_on_host_removal": false,
        "connect_timeout": "5s",
        "type": "STATIC",
        "http_protocol_options": {},
        "loadAssignment": {
          "clusterName": "self_admin",
          "endpoints": [
            {
              "lbEndpoints": [
                {
                  "endpoint": {
                    "address": {
                      "socket_address": {
                        "address": "127.0.0.1",
                        "port_value": 19000
                      }
                    }
                  }
                }
              ]
            }
          ]
        }
      }
    ],
    "listeners": [
      {
        "name": "envoy_prometheus_metrics_listener",
        "address": {
          "socket_address": {
            "address": "0.0.0.0",
            "port_value": 9102
          }
        },
        "filter_chains": [
          {
            "filters": [
              {
                "name": "envoy.filters.network.http_connection_manager",
                "typedConfig": {
                  "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
                  "stat_prefix": "envoy_prometheus_metrics",
                  "codec_type": "HTTP1",
                  "route_config": {
                    "name": "self_admin_route",
                    "virtual_hosts": [
                      {
                        "name": "self_admin",
                        "domains": [
                          "*"
                        ],
                        "routes": [
                          {
                            "match": {
                              "path": "/metrics"
                            },
                            "route": {
                              "cluster": "self_admin",
                              "prefix_rewrite": "/stats/prometheus"
                            }
                          },
                          {
                            "match": {
                              "prefix": "/"
                            },
                            "direct_response": {
                              "status": 404
                            }
                          }
                        ]
                      }
                    ]
                  },
                  "http_filters": [
                    {
                      "name": "envoy.filters.http.router"
                    }
                  ]
                }
              }
            ]
          }
        ]
      }
    ]
  },
  "stats_config": {
    "stats_tags": [
      {
        "regex": "^cluster\\.(?:passthrough~)?((?:([^.]+)~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.custom_hash"
      },
      {
        "regex": "^cluster\\.(?:passthrough~)?((?:[^.]+~)?(?:([^.]+)\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.service_subset"
      },
      {
        "regex": "^cluster\\.(?:passthrough~)?((?:[^.]+~)?(?:[^.]+\\.)?([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.service"
      },
      {
        "regex": "^cluster\\.(?:passthrough~)?((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.namespace"
      },
      {
        "regex": "^cluster\\.(?:passthrough~)?((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.datacenter"
      },
      {
        "regex": "^cluster\\.(?:passthrough~)?((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.routing_type"
      },
      {
        "regex": "^cluster\\.(?:passthrough~)?((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.consul\\.)",
        "tag_name": "consul.destination.trust_domain"
      },
      {
        "regex": "^cluster\\.(?:passthrough~)?(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.destination.target"
      },
      {
        "regex": "^cluster\\.(?:passthrough~)?(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+)\\.consul\\.)",
        "tag_name": "consul.destination.full_target"
      },
      {
        "regex": "^(?:tcp|http)\\.upstream\\.(([^.]+)(?:\\.[^.]+)?\\.[^.]+\\.)",
        "tag_name": "consul.upstream.service"
      },
      {
        "regex": "^(?:tcp|http)\\.upstream\\.([^.]+(?:\\.[^.]+)?\\.([^.]+)\\.)",
        "tag_name": "consul.upstream.datacenter"
      },
      {
        "regex": "^(?:tcp|http)\\.upstream\\.([^.]+(?:\\.([^.]+))?\\.[^.]+\\.)",
        "tag_name": "consul.upstream.namespace"
      },
      {
        "regex": "^cluster\\.((?:([^.]+)~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.custom_hash"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:([^.]+)\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.service_subset"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.service"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.namespace"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.datacenter"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.routing_type"
      },
      {
        "regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.consul\\.)",
        "tag_name": "consul.trust_domain"
      },
      {
        "regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
        "tag_name": "consul.target"
      },
      {
        "regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+)\\.consul\\.)",
        "tag_name": "consul.full_target"
      },
      {
        "tag_name": "local_cluster",
        "fixed_value": "redacted"
      },
      {
        "tag_name": "consul.source.service",
        "fixed_value": "redacted"
      },
      {
        "tag_name": "consul.source.namespace",
        "fixed_value": "default"
      },
      {
        "tag_name": "consul.source.datacenter",
        "fixed_value": "redacted"
      }
    ],
    "use_all_default_tags": true
  },
  "dynamic_resources": {
    "lds_config": {
      "ads": {},
      "resource_api_version": "V3"
    },
    "cds_config": {
      "ads": {},
      "resource_api_version": "V3"
    },
    "ads_config": {
      "api_type": "DELTA_GRPC",
      "transport_api_version": "V3",
      "grpc_services": {
        "initial_metadata": [
          {
            "key": "x-consul-token",
            "value": "redacted"
          }
        ],
        "envoy_grpc": {
          "cluster_name": "local_agent"
        }
      }
    }
  }
}

envoy logs

The main thing I notice in the Envoy logs is that it only sets up the Prometheus listener and the local admin listener, though I could definitely be missing something.
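
To narrow the logs down, one option (a sketch; the unit name envoy.service is hypothetical) is to pull only the listener- and xDS-related lines out of the journal:

# Filter the Envoy container logs for listener / LDS / delta-xDS activity
journalctl -u envoy.service | grep -Ei 'listener|lds|delta'

The full debug log from startup follows: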


Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal systemd[1]: Started "Envoy HashiCorp Consul - A service mesh solution".
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:305] initializing epoch 0 (base id=0, hot restart version=11.120)
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:307] statically linked extensions:
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.compression.decompressor: envoy.compression.gzip.decompressor
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.tcp_grpc, envoy.file_access_log, envoy.http_grpc_access_log, envoy.tcp_grpc_access_log
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.udp_packet_writers: udp_default_writer, udp_gso_batch_writer
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.thrift_proxy.transports: auto, framed, header, unframed
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.quic_server_codec: quiche
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.dubbo_proxy.serializers: dubbo.hessian2
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.health_checkers: envoy.health_checkers.redis
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.retry_priorities: envoy.retry_priorities.previous_priorities
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.dubbo_proxy.route_matchers: default
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, tls
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.quic_client_codec: quiche
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.udp_listeners: quiche_quic_listener, raw_udp_listener
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.compression.compressor: envoy.compression.gzip.compressor
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.http.cache: envoy.extensions.http.cache.simple
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.resolvers: envoy.ip
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.dubbo_proxy.protocols: dubbo
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.upstreams: envoy.filters.connection_pools.http.generic, envoy.filters.connection_pools.http.http, envoy.filters.connection_pools.http.tcp
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.statsd
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.bootstrap: envoy.extensions.network.socket_interface.default_socket_interface
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.gzip, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.oauth, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.squash, envoy.filters.http.tap, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.795][1][info][main] [source/server/server.cc:309]   envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.805][1][info][main] [source/server/server.cc:325] HTTP header map info:
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.806][1][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.806][1][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.806][1][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.806][1][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.806][1][info][main] [source/server/server.cc:328]   request header map: 608 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-method,authorization,cache-control,cdn-loop,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-ot-span-context,x-request-id
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.806][1][info][main] [source/server/server.cc:328]   request trailer map: 128 bytes:
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.806][1][info][main] [source/server/server.cc:328]   response header map: 424 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.806][1][info][main] [source/server/server.cc:328]   response trailer map: 152 bytes: grpc-message,grpc-status
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.808][1][debug][main] [source/server/overload_manager_impl.cc:264] No overload action is configured for envoy.overload_actions.shrink_heap.
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.809][1][debug][main] [source/server/overload_manager_impl.cc:264] No overload action is configured for envoy.overload_actions.stop_accepting_connections.
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.809][1][debug][main] [source/server/overload_manager_impl.cc:264] No overload action is configured for envoy.overload_actions.stop_accepting_connections.
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.809][1][info][main] [source/server/server.cc:448] admin address: 127.0.0.1:19000
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.810][1][info][main] [source/server/server.cc:583] runtime: layers:
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: - name: base
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: static_layer:
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: {}
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: - name: admin
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: admin_layer:
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: {}
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.810][1][info][config] [source/server/configuration_impl.cc:95] loading tracing configuration
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.810][1][info][config] [source/server/configuration_impl.cc:70] loading 0 static secret(s)
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.810][1][info][config] [source/server/configuration_impl.cc:76] loading 2 cluster(s)
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.815][14][debug][grpc] [source/common/grpc/google_async_client_impl.cc:49] completionThread running
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.840][1][debug][upstream] [source/common/upstream/upstream_impl.cc:286] transport socket match, socket default selected for host with address 127.0.0.1:8502
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.865][1][debug][upstream] [source/common/upstream/upstream_impl.cc:286] transport socket match, socket default selected for host with address 127.0.0.1:19000
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.865][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster local_agent
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster self_admin
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/upstream_impl.cc:991] initializing Primary cluster local_agent completed
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][init] [source/common/init/manager_impl.cc:49] init manager Cluster local_agent contains no targets
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][init] [source/common/init/watcher_impl.cc:14] init manager Cluster local_agent initialized, notifying ClusterImplBase
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster local_agent added 1 removed 0
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:107] cm init: init complete: cluster=local_agent primary=0 secondary=0
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 0
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:79] cm init: adding: cluster=local_agent primary=0 secondary=0
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/upstream_impl.cc:991] initializing Primary cluster self_admin completed
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][init] [source/common/init/manager_impl.cc:49] init manager Cluster self_admin contains no targets
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][init] [source/common/init/watcher_impl.cc:14] init manager Cluster self_admin initialized, notifying ClusterImplBase
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster self_admin added 1 removed 0
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:107] cm init: init complete: cluster=self_admin primary=0 secondary=0
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 0
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:79] cm init: adding: cluster=self_admin primary=0 secondary=0
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 1
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:136] maybe finish initialize primary init clusters empty: true
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][config] [bazel-out/aarch64-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:56] Establishing new gRPC bidi stream for rpc DeltaAggregatedResources(stream .envoy.service.discovery.v3.DeltaDiscoveryRequest) returns (stream .envoy.service.discovery.v3.DeltaDiscoveryResponse);
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][router] [source/common/router/router.cc:429] [C0][S14774680687143312245] cluster 'local_agent' match for URL '/envoy.service.discovery.v3.AggregatedDiscoveryService/DeltaAggregatedResources'
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][router] [source/common/router/router.cc:586] [C0][S14774680687143312245] router decoding headers:
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'POST'
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/envoy.service.discovery.v3.AggregatedDiscoveryService/DeltaAggregatedResources'
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', 'local_agent'
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':scheme', 'https'
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'te', 'trailers'
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-type', 'application/grpc'
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-consul-token', 'redacted'
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-internal', 'true'
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-forwarded-for', '10.20.212.252'
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][pool] [source/common/http/conn_pool_base.cc:71] queueing stream due to no available connections
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][pool] [source/common/conn_pool/conn_pool_base.cc:104] creating a new connection
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][client] [source/common/http/codec_client.cc:39] [C0] connecting
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][connection] [source/common/network/connection_impl.cc:769] [C0] connecting to 127.0.0.1:8502
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.866][1][debug][connection] [source/common/network/connection_impl.cc:785] [C0] connection in progress
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.871][1][debug][http2] [source/common/http/http2/codec_impl.cc:1179] [C0] updating connection-level initial window size to 268435456
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.871][1][info][config] [source/server/configuration_impl.cc:80] loading 1 listener(s)
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.871][1][debug][config] [source/server/configuration_impl.cc:82] listener #0:
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.871][1][debug][config] [source/server/listener_manager_impl.cc:395] begin add/update listener: name=envoy_prometheus_metrics_listener hash=2213782915611572416
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.871][1][debug][config] [source/server/listener_manager_impl.cc:432] use full listener update path for listener name=envoy_prometheus_metrics_listener hash=2213782915611572416
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.872][1][debug][config] [source/server/listener_manager_impl.cc:95]   filter #0:
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.872][1][debug][config] [source/server/listener_manager_impl.cc:96]     name: envoy.filters.network.http_connection_manager
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.872][1][debug][config] [source/server/listener_manager_impl.cc:103]   config: {
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "http_filters": [
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: {
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "name": "envoy.filters.http.router"
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: }
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ],
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "stat_prefix": "envoy_prometheus_metrics",
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "route_config": {
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "name": "self_admin_route",
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "virtual_hosts": [
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: {
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "routes": [
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: {
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "route": {
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "cluster": "self_admin",
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "prefix_rewrite": "/stats/prometheus"
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: },
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "match": {
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "path": "/metrics"
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: }
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: },
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: {
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "direct_response": {
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "status": 404
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: },
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "match": {
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "prefix": "/"
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: }
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: }
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ],
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "domains": [
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "*"
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ],
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "name": "self_admin"
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: }
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ]
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: },
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: "codec_type": "HTTP1"
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: }
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:528]     http filter #0
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:550]       name: envoy.filters.http.router
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:557]     config: {}
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][config] [source/server/filter_chain_manager_impl.cc:215] new fc_contexts has 1 filter chains, including 1 newly built
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][init] [source/common/init/manager_impl.cc:24] added target Listener-init-target envoy_prometheus_metrics_listener to init manager Server
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][config] [source/server/listener_impl.cc:107] Create listen socket for listener envoy_prometheus_metrics_listener on address 0.0.0.0:9102
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][config] [source/server/listener_impl.cc:97] Set listener envoy_prometheus_metrics_listener socket factory local address to 0.0.0.0:9102
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][config] [source/server/listener_impl.cc:653] add active listener: name=envoy_prometheus_metrics_listener, hash=2213782915611572416, address=0.0.0.0:9102
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][info][config] [source/server/configuration_impl.cc:121] loading stats sink configuration
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][init] [source/common/init/manager_impl.cc:24] added target LDS to init manager Server
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][init] [source/common/init/manager_impl.cc:49] init manager RTDS contains no targets
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][init] [source/common/init/watcher_impl.cc:14] init manager RTDS initialized, notifying RTDS
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][info][runtime] [source/common/runtime/runtime_impl.cc:421] RTDS has finished initialization
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:196] continue initializing secondary clusters
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 2
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:136] maybe finish initialize primary init clusters empty: true
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:151] maybe finish initialize secondary init clusters empty: true
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:172] maybe finish initialize cds api ready: true
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.876][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:174] cm init: initializing cds
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.877][1][warning][main] [source/server/server.cc:565] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.877][1][info][main] [source/server/server.cc:679] starting main dispatch loop
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.877][1][debug][connection] [source/common/network/connection_impl.cc:625] [C0] connected
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.881][1][debug][client] [source/common/http/codec_client.cc:77] [C0] connected
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.881][1][debug][pool] [source/common/conn_pool/conn_pool_base.cc:205] [C0] attaching to next stream
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.881][1][debug][pool] [source/common/conn_pool/conn_pool_base.cc:126] [C0] creating stream
Aug 02 22:38:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:54.881][1][debug][router] [source/common/router/upstream_request.cc:362] [C0][S14774680687143312245] pool ready
Aug 02 22:38:59 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:59.885][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:38:59 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:38:59.885][1][debug][main] [source/server/server.cc:200] Envoy is not fully initialized, skipping histogram merge and flushing stats
Aug 02 22:39:04 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:04.895][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:39:04 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:04.895][1][debug][main] [source/server/server.cc:200] Envoy is not fully initialized, skipping histogram merge and flushing stats
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 4
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:136] maybe finish initialize primary init clusters empty: true
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:151] maybe finish initialize secondary init clusters empty: true
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:172] maybe finish initialize cds api ready: true
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:178] cm init: all clusters initialized
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][info][main] [source/server/server.cc:660] all clusters initialized. initializing init manager
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][debug][init] [source/common/init/manager_impl.cc:53] init manager Server initializing
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][debug][init] [source/common/init/target_impl.cc:15] init manager Server initializing target Listener-init-target envoy_prometheus_metrics_listener
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][debug][init] [source/common/init/manager_impl.cc:49] init manager Listener-local-init-manager envoy_prometheus_metrics_listener 2213782915611572416 contains no targets
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][debug][init] [source/common/init/watcher_impl.cc:14] init manager Listener-local-init-manager envoy_prometheus_metrics_listener 2213782915611572416 initialized, notifying Listener-local-init-watcher envoy_prometheus_metrics_listener
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][debug][init] [source/common/init/watcher_impl.cc:14] target Listener-init-target envoy_prometheus_metrics_listener initialized, notifying init manager Server
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.885][1][debug][init] [source/common/init/target_impl.cc:15] init manager Server initializing target LDS
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.905][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:39:09 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:09.905][1][debug][main] [source/server/server.cc:200] Envoy is not fully initialized, skipping histogram merge and flushing stats
Aug 02 22:39:14 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:14.916][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:39:14 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:14.916][1][debug][main] [source/server/server.cc:200] Envoy is not fully initialized, skipping histogram merge and flushing stats
Aug 02 22:39:19 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:19.925][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:39:19 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:19.925][1][debug][main] [source/server/server.cc:200] Envoy is not fully initialized, skipping histogram merge and flushing stats
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.895][1][debug][init] [source/common/init/watcher_impl.cc:14] target LDS initialized, notifying init manager Server
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.895][1][debug][init] [source/common/init/watcher_impl.cc:14] init manager Server initialized, notifying RunHelper
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.895][1][info][config] [source/server/listener_manager_impl.cc:888] all dependencies initialized. starting workers
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.895][1][debug][config] [source/server/listener_manager_impl.cc:899] starting worker 0
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.895][1][debug][config] [source/server/listener_manager_impl.cc:899] starting worker 1
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][17][debug][main] [source/server/worker_impl.cc:124] worker entering dispatch loop
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][17][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster local_agent
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][18][debug][main] [source/server/worker_impl.cc:124] worker entering dispatch loop
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][17][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster self_admin
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][18][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster local_agent
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][17][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster local_agent added 1 removed 0
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][17][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster self_admin added 1 removed 0
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][18][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster self_admin
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][18][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster local_agent added 1 removed 0
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][18][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster self_admin added 1 removed 0
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][19][debug][grpc] [source/common/grpc/google_async_client_impl.cc:49] completionThread running
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.896][20][debug][grpc] [source/common/grpc/google_async_client_impl.cc:49] completionThread running
Aug 02 22:39:24 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:24.936][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:39:29 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:29.946][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][conn_handler] [source/server/connection_handler_impl.cc:476] [C1] new connection
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][http] [source/common/http/conn_manager_impl.cc:225] [C1] new stream
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][http] [source/common/http/conn_manager_impl.cc:837] [C1][S15232094548087231014] request headers complete (end_stream=true):
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/metrics'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][http] [source/common/http/filter_manager.cc:721] [C1][S15232094548087231014] request end stream
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][router] [source/common/router/router.cc:429] [C1][S15232094548087231014] cluster 'self_admin' match for URL '/metrics'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][router] [source/common/router/router.cc:586] [C1][S15232094548087231014] router decoding headers:
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/stats/prometheus'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':scheme', 'http'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-forwarded-proto', 'http'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-request-id', '8facc35f-53e7-4b5b-b458-c911670b91f5'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-expected-rq-timeout-ms', '15000'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-original-path', '/metrics'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][pool] [source/common/http/conn_pool_base.cc:71] queueing stream due to no available connections
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:104] creating a new connection
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][client] [source/common/http/codec_client.cc:39] [C2] connecting
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][connection] [source/common/network/connection_impl.cc:769] [C2] connecting to 127.0.0.1:19000
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][1][debug][conn_handler] [source/server/connection_handler_impl.cc:476] [C3] new connection
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.483][17][debug][connection] [source/common/network/connection_impl.cc:785] [C2] connection in progress
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.485][17][debug][connection] [source/common/network/connection_impl.cc:625] [C2] connected
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.485][17][debug][client] [source/common/http/codec_client.cc:77] [C2] connected
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.485][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:205] [C2] attaching to next stream
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.485][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:126] [C2] creating stream
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.485][17][debug][router] [source/common/router/upstream_request.cc:362] [C1][S15232094548087231014] pool ready
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.485][1][debug][http] [source/common/http/conn_manager_impl.cc:225] [C3] new stream
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.485][1][debug][http] [source/common/http/conn_manager_impl.cc:837] [C3][S14570225305165994883] request headers complete (end_stream=true):
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/stats/prometheus'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-forwarded-proto', 'http'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-request-id', '8facc35f-53e7-4b5b-b458-c911670b91f5'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-expected-rq-timeout-ms', '15000'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-original-path', '/metrics'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-length', '0'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.485][1][debug][http] [source/common/http/filter_manager.cc:721] [C3][S14570225305165994883] request end stream
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.485][1][debug][admin] [source/server/admin/admin_filter.cc:66] [C3][S14570225305165994883] request complete: path: /stats/prometheus
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.494][1][debug][http] [source/common/http/conn_manager_impl.cc:1452] [C3][S14570225305165994883] encoding headers via codec (end_stream=false):
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':status', '200'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-type', 'text/plain; charset=UTF-8'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'cache-control', 'no-cache, max-age=0'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-content-type-options', 'nosniff'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'date', 'Mon, 02 Aug 2021 22:39:30 GMT'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'server', 'envoy'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.494][17][debug][router] [source/common/router/router.cc:1178] [C1][S15232094548087231014] upstream headers complete: end_stream=false
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.496][17][debug][http] [source/common/http/conn_manager_impl.cc:1452] [C1][S15232094548087231014] encoding headers via codec (end_stream=false):
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':status', '200'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-type', 'text/plain; charset=UTF-8'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'cache-control', 'no-cache, max-age=0'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-content-type-options', 'nosniff'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'date', 'Mon, 02 Aug 2021 22:39:30 GMT'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'server', 'envoy'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-upstream-service-time', '9'
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.496][17][debug][client] [source/common/http/codec_client.cc:109] [C2] response complete
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.497][17][debug][pool] [source/common/http/http1/conn_pool.cc:50] [C2] response complete
Aug 02 22:39:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:30.497][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:151] [C2] destroying stream: 0 remaining
Aug 02 22:39:34 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:34.956][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:39:39 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:39.966][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:39:44 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:44.975][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][17][debug][http] [source/common/http/conn_manager_impl.cc:225] [C1] new stream
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][17][debug][http] [source/common/http/conn_manager_impl.cc:837] [C1][S896893015724333923] request headers complete (end_stream=true):
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/metrics'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][17][debug][http] [source/common/http/filter_manager.cc:721] [C1][S896893015724333923] request end stream
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][17][debug][router] [source/common/router/router.cc:429] [C1][S896893015724333923] cluster 'self_admin' match for URL '/metrics'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][17][debug][router] [source/common/router/router.cc:586] [C1][S896893015724333923] router decoding headers:
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/stats/prometheus'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':scheme', 'http'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-forwarded-proto', 'http'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-request-id', 'ff360203-4fda-49e9-affb-793d46fbc272'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-expected-rq-timeout-ms', '15000'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-original-path', '/metrics'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:174] [C2] using existing connection
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:126] [C2] creating stream
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][17][debug][router] [source/common/router/upstream_request.cc:362] [C1][S896893015724333923] pool ready
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][1][debug][http] [source/common/http/conn_manager_impl.cc:225] [C3] new stream
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][1][debug][http] [source/common/http/conn_manager_impl.cc:837] [C3][S12329617817177308722] request headers complete (end_stream=true):
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/stats/prometheus'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-forwarded-proto', 'http'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-request-id', 'ff360203-4fda-49e9-affb-793d46fbc272'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-expected-rq-timeout-ms', '15000'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-original-path', '/metrics'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-length', '0'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][1][debug][http] [source/common/http/filter_manager.cc:721] [C3][S12329617817177308722] request end stream
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.483][1][debug][admin] [source/server/admin/admin_filter.cc:66] [C3][S12329617817177308722] request complete: path: /stats/prometheus
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.495][1][debug][http] [source/common/http/conn_manager_impl.cc:1452] [C3][S12329617817177308722] encoding headers via codec (end_stream=false):
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':status', '200'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-type', 'text/plain; charset=UTF-8'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'cache-control', 'no-cache, max-age=0'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-content-type-options', 'nosniff'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'date', 'Mon, 02 Aug 2021 22:39:45 GMT'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'server', 'envoy'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.496][17][debug][router] [source/common/router/router.cc:1178] [C1][S896893015724333923] upstream headers complete: end_stream=false
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.496][17][debug][http] [source/common/http/conn_manager_impl.cc:1452] [C1][S896893015724333923] encoding headers via codec (end_stream=false):
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':status', '200'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-type', 'text/plain; charset=UTF-8'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'cache-control', 'no-cache, max-age=0'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-content-type-options', 'nosniff'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'date', 'Mon, 02 Aug 2021 22:39:45 GMT'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'server', 'envoy'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-upstream-service-time', '13'
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.496][17][debug][client] [source/common/http/codec_client.cc:109] [C2] response complete
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.496][17][debug][pool] [source/common/http/http1/conn_pool.cc:50] [C2] response complete
Aug 02 22:39:45 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:45.496][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:151] [C2] destroying stream: 0 remaining
Aug 02 22:39:49 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:49.985][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:39:54 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:39:54.995][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.006][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][17][debug][http] [source/common/http/conn_manager_impl.cc:225] [C1] new stream
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][17][debug][http] [source/common/http/conn_manager_impl.cc:837] [C1][S16746810938224782172] request headers complete (end_stream=true):
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/metrics'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][17][debug][http] [source/common/http/filter_manager.cc:721] [C1][S16746810938224782172] request end stream
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][17][debug][router] [source/common/router/router.cc:429] [C1][S16746810938224782172] cluster 'self_admin' match for URL '/metrics'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][17][debug][router] [source/common/router/router.cc:586] [C1][S16746810938224782172] router decoding headers:
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/stats/prometheus'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':scheme', 'http'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-forwarded-proto', 'http'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-request-id', 'a7aede95-8159-487a-a389-f6c166da1aba'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-expected-rq-timeout-ms', '15000'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-original-path', '/metrics'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:174] [C2] using existing connection
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:126] [C2] creating stream
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][17][debug][router] [source/common/router/upstream_request.cc:362] [C1][S16746810938224782172] pool ready
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][1][debug][http] [source/common/http/conn_manager_impl.cc:225] [C3] new stream
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][1][debug][http] [source/common/http/conn_manager_impl.cc:837] [C3][S5388511821129317748] request headers complete (end_stream=true):
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/stats/prometheus'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-forwarded-proto', 'http'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-request-id', 'a7aede95-8159-487a-a389-f6c166da1aba'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-expected-rq-timeout-ms', '15000'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-original-path', '/metrics'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-length', '0'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][1][debug][http] [source/common/http/filter_manager.cc:721] [C3][S5388511821129317748] request end stream
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.482][1][debug][admin] [source/server/admin/admin_filter.cc:66] [C3][S5388511821129317748] request complete: path: /stats/prometheus
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.491][1][debug][http] [source/common/http/conn_manager_impl.cc:1452] [C3][S5388511821129317748] encoding headers via codec (end_stream=false):
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':status', '200'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-type', 'text/plain; charset=UTF-8'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'cache-control', 'no-cache, max-age=0'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-content-type-options', 'nosniff'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'date', 'Mon, 02 Aug 2021 22:40:00 GMT'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'server', 'envoy'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.491][17][debug][router] [source/common/router/router.cc:1178] [C1][S16746810938224782172] upstream headers complete: end_stream=false
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.492][17][debug][http] [source/common/http/conn_manager_impl.cc:1452] [C1][S16746810938224782172] encoding headers via codec (end_stream=false):
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':status', '200'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-type', 'text/plain; charset=UTF-8'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'cache-control', 'no-cache, max-age=0'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-content-type-options', 'nosniff'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'date', 'Mon, 02 Aug 2021 22:40:00 GMT'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'server', 'envoy'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-upstream-service-time', '9'
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.492][17][debug][client] [source/common/http/codec_client.cc:109] [C2] response complete
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.492][17][debug][pool] [source/common/http/http1/conn_pool.cc:50] [C2] response complete
Aug 02 22:40:00 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:00.492][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:151] [C2] destroying stream: 0 remaining
Aug 02 22:40:05 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:05.016][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:40:10 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:10.026][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.035][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.482][17][debug][http] [source/common/http/conn_manager_impl.cc:225] [C1] new stream
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][17][debug][http] [source/common/http/conn_manager_impl.cc:837] [C1][S12170500255419954415] request headers complete (end_stream=true):
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/metrics'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][17][debug][http] [source/common/http/filter_manager.cc:721] [C1][S12170500255419954415] request end stream
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][17][debug][router] [source/common/router/router.cc:429] [C1][S12170500255419954415] cluster 'self_admin' match for URL '/metrics'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][17][debug][router] [source/common/router/router.cc:586] [C1][S12170500255419954415] router decoding headers:
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/stats/prometheus'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':scheme', 'http'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-forwarded-proto', 'http'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-request-id', 'e76da251-1b34-44c4-87fd-4711c3842afe'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-expected-rq-timeout-ms', '15000'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-original-path', '/metrics'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:174] [C2] using existing connection
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:126] [C2] creating stream
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][17][debug][router] [source/common/router/upstream_request.cc:362] [C1][S12170500255419954415] pool ready
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][1][debug][http] [source/common/http/conn_manager_impl.cc:225] [C3] new stream
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][1][debug][http] [source/common/http/conn_manager_impl.cc:837] [C3][S12125580656725570242] request headers complete (end_stream=true):
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':authority', '10.20.212.252:9102'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':path', '/stats/prometheus'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':method', 'GET'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'user-agent', 'Prometheus/2.26.0'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept', 'application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'accept-encoding', 'gzip'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-prometheus-scrape-timeout-seconds', '10.000000'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-forwarded-proto', 'http'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-request-id', 'e76da251-1b34-44c4-87fd-4711c3842afe'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-expected-rq-timeout-ms', '15000'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-original-path', '/metrics'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-length', '0'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][1][debug][http] [source/common/http/filter_manager.cc:721] [C3][S12125580656725570242] request end stream
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.483][1][debug][admin] [source/server/admin/admin_filter.cc:66] [C3][S12125580656725570242] request complete: path: /stats/prometheus
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.491][1][debug][http] [source/common/http/conn_manager_impl.cc:1452] [C3][S12125580656725570242] encoding headers via codec (end_stream=false):
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':status', '200'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-type', 'text/plain; charset=UTF-8'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'cache-control', 'no-cache, max-age=0'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-content-type-options', 'nosniff'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'date', 'Mon, 02 Aug 2021 22:40:15 GMT'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'server', 'envoy'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.492][17][debug][router] [source/common/router/router.cc:1178] [C1][S12170500255419954415] upstream headers complete: end_stream=false
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.492][17][debug][http] [source/common/http/conn_manager_impl.cc:1452] [C1][S12170500255419954415] encoding headers via codec (end_stream=false):
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: ':status', '200'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'content-type', 'text/plain; charset=UTF-8'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'cache-control', 'no-cache, max-age=0'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-content-type-options', 'nosniff'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'date', 'Mon, 02 Aug 2021 22:40:15 GMT'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'server', 'envoy'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: 'x-envoy-upstream-service-time', '9'
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.492][17][debug][client] [source/common/http/codec_client.cc:109] [C2] response complete
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.492][17][debug][pool] [source/common/http/http1/conn_pool.cc:50] [C2] response complete
Aug 02 22:40:15 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:15.492][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:151] [C2] destroying stream: 0 remaining
Aug 02 22:40:20 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:20.046][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:40:25 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:25.055][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:40:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:30.066][1][debug][main] [source/server/server.cc:190] flushing stats
Aug 02 22:40:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:30.483][17][debug][http] [source/common/http/conn_manager_impl.cc:225] [C1] new stream
Aug 02 22:40:30 ip-10-20-212-252.us-west-2.compute.internal docker[10380]: [2021-08-02 22:40:30.483][17][debug][http] [source/common/http/conn_manager_impl.cc:837] [C1][S15472970824146228865] request headers complete (end_stream=true):

@jkirschner-hashicorp removed the waiting-reply label Aug 9, 2021
@jkirschner-hashicorp
Contributor

Hi @dschaaff:

Thank you for providing that information. We don't see any obvious issues in what you've provided so far, so we'd need some additional information to explore further:

  • Scrubbed Envoy config dump after Envoy boots up as a sidecar proxy (see the example command below for one way to capture it)
  • Consul logs after startup from the server and client agents
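
For example, the config dump can be captured from Envoy's admin endpoint (127.0.0.1:19000 in the logs above) with something like the following; the output file name is just an example, and any secrets should be scrubbed before attaching:

# Pull the full config dump from the Envoy admin API
curl -s http://127.0.0.1:19000/config_dump > envoy_config_dump.json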

While we don't expect it to change anything for you, it might be worth a try: disable the streaming backend on the agents by setting use_streaming_backend to false (so that traditional blocking queries are used instead) and see if anything changes. Defaulting to the streaming backend is something that changed between your two versions (1.9.5 and 1.10.1); we don't expect it to be related to what you're seeing, but it's worth ruling out.
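
For reference, that setting in an agent config file would look something like this (a sketch assuming a JSON agent configuration; only the key name comes from the suggestion above):

{
  "use_streaming_backend": false
}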

@dschaaff
Author

dschaaff commented Aug 9, 2021

@jkirschner-hashicorp

I disabled streaming on both servers and agents and did not see any change in behavior.

I'm attaching several files:

  • config dump of envoy bootstrapped by consul agent 1.10.1
  • config dump of envoy bootstrapped by consul agent 1.9.5 from the same node as above
  • logs of consul agent 1.10.1 after service start
  • logs of 1.10.1 server after service start

Let me know if you need any further information.

Appreciate it!

consul1.10_envoy_config_dump.txt
consul_1.9_envoy_dump.txt
consul-agent-1.10-log.txt
consul-1.10-server.log

@dschaaff dschaaff closed this as completed Aug 9, 2021
@dschaaff dschaaff reopened this Aug 9, 2021
@dschaaff
Author

dschaaff commented Aug 10, 2021

I should have done this sooner, but I went ahead and tested the behavior with agent versions 1.9.6, 1.9.7, and 1.9.8. Each of those three versions exhibits the same behavior described in this ticket: no Envoy upstream listeners are configured.

Once I downgrade to 1.9.5, everything starts working as expected again. I've reviewed the changelogs and nothing jumps out at me as a major change. I'd appreciate any guidance on next steps to figure this out.

@dschaaff
Author

dschaaff commented Aug 10, 2021

OK, I am making progress: Envoy listeners are configured correctly if my sidecar service definition includes an ACL token.

{
  "service": {
    "name": "ui-sidecar-proxy",
    "id": "ui-sidecar-proxy",
    "port": 20000,
    "address": "10.20.203.196",
    "kind": "connect-proxy",
    "token": "redacted",
    "proxy": {
      "destination_service_id": "ui",
      "destination_service_name": "ui",
      "local_service_address": "127.0.0.1",
      "local_service_port": 80,
      "mode": "direct",
      "upstreams": [
        {
          "destination_name": "upstream-1",
          "destination_type": "service",
          "local_bind_port": 2000
        },
        {
          "destination_name": "upstream-2",
          "destination_type": "service",
          "local_bind_port": 3000
        }
      ]
    },
    "meta": {
      "availability_zone": "us-west-2a"
    },
    "checks": [
      {
        "deregister_critical_service_after": "10m",
        "interval": "10s",
        "name": "Proxy Public Listener",
        "tcp": "10.20.203.196:20000"
      },
      {
        "alias_service": "ui",
        "name": "Destination Alias"
      }
    ]
  }
}

The clue was the RPC permission-denied errors in the log. We previously did not include a token in the sidecar service definition. The service was registered using the Consul agent's default token, which had service:write permission. The Envoy config was then bootstrapped with a dedicated ACL token with permissions scoped to the service:

service "admin-ui" {
 policy = "write"
}
service "admin-ui-sidecar-proxy" {
	policy = "write"
}
service_prefix "" {
	policy = "read"
}
node_prefix "" {
	policy = "read"
}
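
As a rough sketch of how that dedicated token is handed to the bootstrap step (the token value and output path are placeholders; the exact invocation lives in the systemd unit mentioned below):

consul connect envoy -proxy-id=ui-sidecar-proxy -token=<redacted> -bootstrap > /path/to/envoy-bootstrap.json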

See the systemd unit file in my original comment for where that token is passed. This setup worked until agent version 1.9.6; starting with that version, I have to include the token in the sidecar service registration for Envoy to be configured correctly. This is also the reason our k8s-injected pods were not configuring Envoy correctly: the consul-k8s service template does not include the token field (https://github.com/hashicorp/consul-k8s/blob/v0.25.0/connect-inject/container_init.go#L320). A sketch of what adding a token there might look like follows the template below.

services {
  id      = "${SERVICE_ID}"
  name    = "admin-ui"
  address = "${POD_IP}"
  port    = 80
  meta = {
    namespace     = "admin-ui"
    pod-name      = "${POD_NAME}"
    k8s-namespace = "${POD_NAMESPACE}"
  }
}

services {
  id      = "${PROXY_SERVICE_ID}"
  name    = "admin-ui-sidecar-proxy"
  kind    = "connect-proxy"
  address = "${POD_IP}"
  port    = 20000
  meta = {
    availability_zone = "us-west-2a"
    namespace         = "admin-ui"
    pod-name          = "${POD_NAME}"
    k8s-namespace     = "${POD_NAMESPACE}"
  }
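
As a hypothetical illustration (not what consul-k8s actually generates), the sidecar block above would presumably need a token field for Envoy's xDS requests to be authorized, along the lines of:

services {
  id      = "${PROXY_SERVICE_ID}"
  name    = "admin-ui-sidecar-proxy"
  kind    = "connect-proxy"
  address = "${POD_IP}"
  port    = 20000
  # hypothetical: register the sidecar with the same ACL token Envoy uses
  token   = "<acl-token>"
}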

Is this expected behavior or a bug?

@dschaaff
Author

This behavior possibly changed due to #10188.

@dnephin
Contributor

dnephin commented Aug 10, 2021

Thanks for tracking that down! It does seem likely that #10188 is related based on what you found. To help debug the problem, can you tell me more about which of these tokens (https://www.consul.io/docs/security/acl/acl-system#configuring-acls) you have configured?

You mentioned that acl.tokens.default is configured with service:write. Do you also have an acl.tokens.agent configured, and if so, which permissions does it have?

That PR (and #9683, which preceded it) should only have changed the behavior for deregistering services, but it sounds like that is not the case.

If you have the full error text of the RPC permission denied errors, that would also be helpful to debug.

@dschaaff
Author

dschaaff commented Aug 10, 2021

This is my ACL config on the EC2-based nodes:

    "acl": {
        "default_policy": "deny",
        "down_policy": "extend-cache",
        "enable_token_persistence": true,
        "enabled": true,
        "token_ttl": "30s",
        "tokens": {
            "agent": "redacted",
            "default": "redacted"
        }
    },

acl.tokens.default and acl.tokens.agent are both set to the same token. That token has the following ACL policy attached:

node_prefix "" {
   policy = "write"
}
service_prefix "" {
   policy = "read"
}
service_prefix "" {
  policy = "write"
}
service "" {
  policy = "write"
}

I can successfully register the service and the proxy sidecar with the local agent without specifying a token in the sidecar service definition.
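
For reference, the equivalent registration through the local agent's HTTP API (if registering that way; the payload file name is just a placeholder) would be roughly the following, and with no token on the request the agent falls back to acl.tokens.default:

curl --request PUT --data @ui-sidecar-proxy.json \
  http://localhost:8500/v1/agent/service/register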

 curl -s http://localhost:8500/v1/agent/services | python -m json.tool
{
    "admin-ui": {
        "Address": "10.20.212.252",
        "Datacenter": "stg-datacenter",
        "EnableTagOverride": false,
        "ID": "admin-ui",
        "Port": 0,
        "Service": "admin-ui",
        "SocketPath": "",
        "TaggedAddresses": {
            "lan_ipv4": {
                "Address": "10.20.212.252",
                "Port": 0
            },
            "wan_ipv4": {
                "Address": "10.20.212.252",
                "Port": 0
            }
        },
        "Weights": {
            "Passing": 1,
            "Warning": 1
        }
    },
    "admin-ui-sidecar-proxy": {
        "Address": "10.20.212.252",
        "Datacenter": "stg-datacenter",
        "EnableTagOverride": false,
        "ID": "admin-ui-sidecar-proxy",
        "Kind": "connect-proxy",
        "Port": 20000,
        "Proxy": {
            "Config": {
                "envoy_prometheus_bind_addr": "0.0.0.0:9102",
                "protocol": "http"
            },
            "DestinationServiceID": "admin-ui",
            "DestinationServiceName": "admin-ui",
            "Expose": {},
            "LocalServiceAddress": "127.0.0.1",
            "LocalServicePort": 80,
            "MeshGateway": {},
            "Upstreams": [
                {
                    "Config": {
                        "protocol": "http"
                    },
                    "DestinationName": "service-a",
                    "DestinationType": "service",
                    "LocalBindPort": 1000,
                    "MeshGateway": {}
                },
                {
                    "Config": {
                        "protocol": "http"
                    },
                    "DestinationName": "service-b",
                    "DestinationType": "service",
                    "LocalBindPort": 2000,
                    "MeshGateway": {}
                },
                {
                    "Config": {
                        "protocol": "http"
                    },
                    "DestinationName": "service-c",
                    "DestinationType": "service",
                    "LocalBindPort": 3000,
                    "MeshGateway": {}
                },
                {
                    "Config": {
                        "protocol": "http"
                    },
                    "DestinationName": "service-d",
                    "DestinationType": "service",
                    "LocalBindPort": 4000,
                    "MeshGateway": {}
                },
                {
                    "Config": {
                        "protocol": "http"
                    },
                    "DestinationName": "service-e",
                    "DestinationType": "service",
                    "LocalBindPort": 5500,
                    "MeshGateway": {}
                }
            ]
        },
        "Service": "admin-ui-sidecar-proxy",
        "SocketPath": "",
        "TaggedAddresses": {
            "lan_ipv4": {
                "Address": "10.20.212.252",
                "Port": 20000
            },
            "wan_ipv4": {
                "Address": "10.20.212.252",
                "Port": 20000
            }
        },
        "Weights": {
            "Passing": 1,
            "Warning": 1
        }
    }
}

The problem occurs when I bootstrap the Envoy config and then start Envoy. At that point, the logs are flooded with the following (see #10714 (comment) for the full log file):

Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.569Z [ERROR] agent.proxycfg: failed to handle update from watch: service_id=admin-ui-sidecar-proxy id=discovery-chain:service-a error="error filling agent cache: rpc error making call: permission denied"
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.570Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.209.189:8300 error="rpc error making call: permission denied"
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.571Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.202.159:8300 error="rpc error making call: permission denied"
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.571Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.208.112:8300 error="rpc error making call: permission denied"
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.634Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.209.189:8300 error="rpc error making call: permission denied"
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.719Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.202.159:8300 error="rpc error making call: permission denied"
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.976Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.208.112:8300 error="rpc error making call: permission denied"
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.976Z [WARN]  agent.cache: handling error in cache.notify: cache-type=compiled-discovery-chain error="rpc error making call: permission denied" index=0
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.976Z [ERROR] agent.proxycfg: failed to handle update from watch: service_id=admin-ui-sidecar-proxy id=discovery-chain:service-b error="error filling agent cache: rpc error making call: permission denied"
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.976Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.209.189:8300 error="rpc error making call: permission denied"
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.977Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.202.159:8300 error="rpc error making call: permission denied"
Aug 09 22:48:53 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:53.978Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.208.112:8300 error="rpc error making call: permission denied"
Aug 09 22:48:54 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:54.152Z [ERROR] agent.client: rpc failed to server: method=Intention.Match server=10.20.209.189:8300 error="rpc error making call: rpc error making call: permission denied"
Aug 09 22:48:54 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:54.158Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.202.159:8300 error="rpc error making call: permission denied"
Aug 09 22:48:54 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:54.403Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.208.112:8300 error="rpc error making call: permission denied"
Aug 09 22:48:54 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:54.447Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.209.189:8300 error="rpc error making call: permission denied"
Aug 09 22:48:54 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:54.484Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.202.159:8300 error="rpc error making call: permission denied"
Aug 09 22:48:54 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:54.655Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.208.112:8300 error="rpc error making call: permission denied"
Aug 09 22:48:54 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:54.712Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.209.189:8300 error="rpc error making call: permission denied"
Aug 09 22:48:54 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:54.792Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.202.159:8300 error="rpc error making call: permission denied"
Aug 09 22:48:54 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:54.867Z [ERROR] agent.client: rpc failed to server: method=DiscoveryChain.Get server=10.20.208.112:8300 error="rpc error making call: permission denied"
Aug 09 22:48:54 ip-10-20-212-252.us-west-2.compute.internal consul[14050]: 2021-08-09T22:48:54.867Z [WARN]  agent.cache: handling error in cache.notify: 

Envoy is configured with its own ACL token with the policy below. I have confirmed this token is correctly set in the x-consul-token field of the Envoy bootstrap config.

service "admin-ui" {
 policy = "write"
}
service "admin-ui-sidecar-proxy" {
	policy = "write"
}
service_prefix "" {
	policy = "read"
}
node_prefix "" {
	policy = "read"
}

If the token that Envoy is using is added to the sidecar service definition, the permission errors go away and Envoy is configured as expected.
