HPE_INVALID_METHOD while using SSL passthrough #7540

Closed

davidamin opened this issue Jul 11, 2019 · 2 comments
Labels
question (Questions that are neither investigations, bugs, nor enhancements)

Comments

@davidamin


Description:
I am attempting to use Envoy to pass traffic to various clusters based on SNI. I followed the FAQ entry "How do I setup SNI?" but omitted the tls_context configuration, as I do not want to terminate TLS in Envoy. When making requests, I get ERR_SSL_PROTOCOL_ERROR from my browser, and I see dispatch error: http/1.1 protocol error: HPE_INVALID_METHOD in the Envoy logs. I would expect the traffic to be routed to the appropriate cluster.

Looking at the request in Wireshark, I believe I'm getting a plain-HTTP 400 as a response to the client hello, but I'm not sure why. If I add the appropriate certificates in the tls_context configuration block, everything works as expected. If I make a direct request to my server, it also succeeds. Am I perhaps misunderstanding the functionality described in #1843?
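
For reference, a minimal sketch of the working tls_context variant, added to the filter chain in the config below (the certificate paths are placeholders, not the actual files used):

    filter_chains:
    - filter_chain_match:
        server_names: ["www.foo.bar"]
      tls_context:
        common_tls_context:
          tls_certificates:
          - certificate_chain: { filename: "/config/certs/www.foo.bar.crt" }  # placeholder path
            private_key: { filename: "/config/certs/www.foo.bar.key" }        # placeholder path
      filters:
      # ... same envoy.http_connection_manager filter as in the full config below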

Repro steps:

  • Run the envoyproxy/envoy-dev:latest Docker image with the config below mounted in and passed as the config param (something like sudo docker run --rm -it -p 443:443 -p 9901:9901 -v /path/to/config:/config envoyproxy/envoy-dev:latest -c /config/config.yaml -l debug).
  • Make an HTTPS request on port 443 with the appropriate domain (my test has been curl -v https://www.foo.bar, or a request via Chrome).

Admin and Stats Output:

access_log_file.flushed_by_timer: 0
access_log_file.reopen_failed: 0
access_log_file.write_buffered: 0
access_log_file.write_completed: 0
access_log_file.write_total_buffered: 0
cluster.service_foo.assignment_stale: 0
cluster.service_foo.assignment_timeout_received: 0
cluster.service_foo.bind_errors: 0
cluster.service_foo.circuit_breakers.default.cx_open: 0
cluster.service_foo.circuit_breakers.default.cx_pool_open: 0
cluster.service_foo.circuit_breakers.default.rq_open: 0
cluster.service_foo.circuit_breakers.default.rq_pending_open: 0
cluster.service_foo.circuit_breakers.default.rq_retry_open: 0
cluster.service_foo.circuit_breakers.high.cx_open: 0
cluster.service_foo.circuit_breakers.high.cx_pool_open: 0
cluster.service_foo.circuit_breakers.high.rq_open: 0
cluster.service_foo.circuit_breakers.high.rq_pending_open: 0
cluster.service_foo.circuit_breakers.high.rq_retry_open: 0
cluster.service_foo.client_ssl_socket_factory.downstream_context_secrets_not_ready: 0
cluster.service_foo.client_ssl_socket_factory.ssl_context_update_by_sds: 0
cluster.service_foo.client_ssl_socket_factory.upstream_context_secrets_not_ready: 0
cluster.service_foo.lb_healthy_panic: 0
cluster.service_foo.lb_local_cluster_not_ok: 0
cluster.service_foo.lb_recalculate_zone_structures: 0
cluster.service_foo.lb_subsets_active: 0
cluster.service_foo.lb_subsets_created: 0
cluster.service_foo.lb_subsets_fallback: 0
cluster.service_foo.lb_subsets_fallback_panic: 0
cluster.service_foo.lb_subsets_removed: 0
cluster.service_foo.lb_subsets_selected: 0
cluster.service_foo.lb_zone_cluster_too_small: 0
cluster.service_foo.lb_zone_no_capacity_left: 0
cluster.service_foo.lb_zone_number_differs: 0
cluster.service_foo.lb_zone_routing_all_directly: 0
cluster.service_foo.lb_zone_routing_cross_zone: 0
cluster.service_foo.lb_zone_routing_sampled: 0
cluster.service_foo.max_host_weight: 0
cluster.service_foo.membership_change: 1
cluster.service_foo.membership_degraded: 0
cluster.service_foo.membership_excluded: 0
cluster.service_foo.membership_healthy: 1
cluster.service_foo.membership_total: 1
cluster.service_foo.original_dst_host_invalid: 0
cluster.service_foo.retry_or_shadow_abandoned: 0
cluster.service_foo.ssl.connection_error: 0
cluster.service_foo.ssl.fail_verify_cert_hash: 0
cluster.service_foo.ssl.fail_verify_error: 0
cluster.service_foo.ssl.fail_verify_no_cert: 0
cluster.service_foo.ssl.fail_verify_san: 0
cluster.service_foo.ssl.handshake: 0
cluster.service_foo.ssl.no_certificate: 0
cluster.service_foo.ssl.session_reused: 0
cluster.service_foo.update_attempt: 4
cluster.service_foo.update_empty: 0
cluster.service_foo.update_failure: 0
cluster.service_foo.update_no_rebuild: 0
cluster.service_foo.update_success: 4
cluster.service_foo.upstream_cx_active: 0
cluster.service_foo.upstream_cx_close_notify: 0
cluster.service_foo.upstream_cx_connect_attempts_exceeded: 0
cluster.service_foo.upstream_cx_connect_fail: 0
cluster.service_foo.upstream_cx_connect_timeout: 0
cluster.service_foo.upstream_cx_destroy: 0
cluster.service_foo.upstream_cx_destroy_local: 0
cluster.service_foo.upstream_cx_destroy_local_with_active_rq: 0
cluster.service_foo.upstream_cx_destroy_remote: 0
cluster.service_foo.upstream_cx_destroy_remote_with_active_rq: 0
cluster.service_foo.upstream_cx_destroy_with_active_rq: 0
cluster.service_foo.upstream_cx_http1_total: 0
cluster.service_foo.upstream_cx_http2_total: 0
cluster.service_foo.upstream_cx_idle_timeout: 0
cluster.service_foo.upstream_cx_max_requests: 0
cluster.service_foo.upstream_cx_none_healthy: 0
cluster.service_foo.upstream_cx_overflow: 0
cluster.service_foo.upstream_cx_pool_overflow: 0
cluster.service_foo.upstream_cx_protocol_error: 0
cluster.service_foo.upstream_cx_rx_bytes_buffered: 0
cluster.service_foo.upstream_cx_rx_bytes_total: 0
cluster.service_foo.upstream_cx_total: 0
cluster.service_foo.upstream_cx_tx_bytes_buffered: 0
cluster.service_foo.upstream_cx_tx_bytes_total: 0
cluster.service_foo.upstream_flow_control_backed_up_total: 0
cluster.service_foo.upstream_flow_control_drained_total: 0
cluster.service_foo.upstream_flow_control_paused_reading_total: 0
cluster.service_foo.upstream_flow_control_resumed_reading_total: 0
cluster.service_foo.upstream_internal_redirect_failed_total: 0
cluster.service_foo.upstream_internal_redirect_succeeded_total: 0
cluster.service_foo.upstream_rq_active: 0
cluster.service_foo.upstream_rq_cancelled: 0
cluster.service_foo.upstream_rq_completed: 0
cluster.service_foo.upstream_rq_maintenance_mode: 0
cluster.service_foo.upstream_rq_pending_active: 0
cluster.service_foo.upstream_rq_pending_failure_eject: 0
cluster.service_foo.upstream_rq_pending_overflow: 0
cluster.service_foo.upstream_rq_pending_total: 0
cluster.service_foo.upstream_rq_per_try_timeout: 0
cluster.service_foo.upstream_rq_retry: 0
cluster.service_foo.upstream_rq_retry_overflow: 0
cluster.service_foo.upstream_rq_retry_success: 0
cluster.service_foo.upstream_rq_rx_reset: 0
cluster.service_foo.upstream_rq_timeout: 0
cluster.service_foo.upstream_rq_total: 0
cluster.service_foo.upstream_rq_tx_reset: 0
cluster.service_foo.version: 0
cluster_manager.active_clusters: 1
cluster_manager.cluster_added: 1
cluster_manager.cluster_modified: 0
cluster_manager.cluster_removed: 0
cluster_manager.cluster_updated: 0
cluster_manager.cluster_updated_via_merge: 0
cluster_manager.update_merge_cancelled: 0
cluster_manager.update_out_of_merge_window: 0
cluster_manager.warming_clusters: 0
http.admin.downstream_cx_active: 1
http.admin.downstream_cx_delayed_close_timeout: 0
http.admin.downstream_cx_destroy: 4
http.admin.downstream_cx_destroy_active_rq: 0
http.admin.downstream_cx_destroy_local: 4
http.admin.downstream_cx_destroy_local_active_rq: 0
http.admin.downstream_cx_destroy_remote: 0
http.admin.downstream_cx_destroy_remote_active_rq: 0
http.admin.downstream_cx_drain_close: 0
http.admin.downstream_cx_http1_active: 1
http.admin.downstream_cx_http1_total: 5
http.admin.downstream_cx_http2_active: 0
http.admin.downstream_cx_http2_total: 0
http.admin.downstream_cx_idle_timeout: 0
http.admin.downstream_cx_overload_disable_keepalive: 0
http.admin.downstream_cx_protocol_error: 4
http.admin.downstream_cx_rx_bytes_buffered: 859
http.admin.downstream_cx_rx_bytes_total: 2610
http.admin.downstream_cx_ssl_active: 0
http.admin.downstream_cx_ssl_total: 0
http.admin.downstream_cx_total: 5
http.admin.downstream_cx_tx_bytes_buffered: 0
http.admin.downstream_cx_tx_bytes_total: 264
http.admin.downstream_cx_upgrades_active: 0
http.admin.downstream_cx_upgrades_total: 0
http.admin.downstream_flow_control_paused_reading_total: 0
http.admin.downstream_flow_control_resumed_reading_total: 0
http.admin.downstream_rq_1xx: 0
http.admin.downstream_rq_2xx: 0
http.admin.downstream_rq_3xx: 0
http.admin.downstream_rq_4xx: 0
http.admin.downstream_rq_5xx: 0
http.admin.downstream_rq_active: 1
http.admin.downstream_rq_completed: 0
http.admin.downstream_rq_http1_total: 1
http.admin.downstream_rq_http2_total: 0
http.admin.downstream_rq_idle_timeout: 0
http.admin.downstream_rq_non_relative_path: 0
http.admin.downstream_rq_overload_close: 0
http.admin.downstream_rq_response_before_rq_complete: 0
http.admin.downstream_rq_rx_reset: 0
http.admin.downstream_rq_timeout: 0
http.admin.downstream_rq_too_large: 0
http.admin.downstream_rq_total: 1
http.admin.downstream_rq_tx_reset: 0
http.admin.downstream_rq_ws_on_non_ws_route: 0
http.admin.rs_too_large: 0
http.async-client.no_cluster: 0
http.async-client.no_route: 0
http.async-client.rq_direct_response: 0
http.async-client.rq_redirect: 0
http.async-client.rq_reset_after_downstream_response_started: 0
http.async-client.rq_total: 0
http.ingress_http.downstream_cx_active: 0
http.ingress_http.downstream_cx_delayed_close_timeout: 0
http.ingress_http.downstream_cx_destroy: 3
http.ingress_http.downstream_cx_destroy_active_rq: 0
http.ingress_http.downstream_cx_destroy_local: 0
http.ingress_http.downstream_cx_destroy_local_active_rq: 0
http.ingress_http.downstream_cx_destroy_remote: 3
http.ingress_http.downstream_cx_destroy_remote_active_rq: 0
http.ingress_http.downstream_cx_drain_close: 0
http.ingress_http.downstream_cx_http1_active: 0
http.ingress_http.downstream_cx_http1_total: 3
http.ingress_http.downstream_cx_http2_active: 0
http.ingress_http.downstream_cx_http2_total: 0
http.ingress_http.downstream_cx_idle_timeout: 0
http.ingress_http.downstream_cx_overload_disable_keepalive: 0
http.ingress_http.downstream_cx_protocol_error: 3
http.ingress_http.downstream_cx_rx_bytes_buffered: 0
http.ingress_http.downstream_cx_rx_bytes_total: 1234
http.ingress_http.downstream_cx_ssl_active: 0
http.ingress_http.downstream_cx_ssl_total: 0
http.ingress_http.downstream_cx_total: 3
http.ingress_http.downstream_cx_tx_bytes_buffered: 0
http.ingress_http.downstream_cx_tx_bytes_total: 198
http.ingress_http.downstream_cx_upgrades_active: 0
http.ingress_http.downstream_cx_upgrades_total: 0
http.ingress_http.downstream_flow_control_paused_reading_total: 0
http.ingress_http.downstream_flow_control_resumed_reading_total: 0
http.ingress_http.downstream_rq_1xx: 0
http.ingress_http.downstream_rq_2xx: 0
http.ingress_http.downstream_rq_3xx: 0
http.ingress_http.downstream_rq_4xx: 0
http.ingress_http.downstream_rq_5xx: 0
http.ingress_http.downstream_rq_active: 0
http.ingress_http.downstream_rq_completed: 0
http.ingress_http.downstream_rq_http1_total: 0
http.ingress_http.downstream_rq_http2_total: 0
http.ingress_http.downstream_rq_idle_timeout: 0
http.ingress_http.downstream_rq_non_relative_path: 0
http.ingress_http.downstream_rq_overload_close: 0
http.ingress_http.downstream_rq_response_before_rq_complete: 0
http.ingress_http.downstream_rq_rx_reset: 0
http.ingress_http.downstream_rq_timeout: 0
http.ingress_http.downstream_rq_too_large: 0
http.ingress_http.downstream_rq_total: 0
http.ingress_http.downstream_rq_tx_reset: 0
http.ingress_http.downstream_rq_ws_on_non_ws_route: 0
http.ingress_http.rs_too_large: 0
http.ingress_http.tracing.client_enabled: 0
http.ingress_http.tracing.health_check: 0
http.ingress_http.tracing.not_traceable: 0
http.ingress_http.tracing.random_sampling: 0
http.ingress_http.tracing.service_forced: 0
listener.0.0.0.0_443.downstream_cx_active: 0
listener.0.0.0.0_443.downstream_cx_destroy: 3
listener.0.0.0.0_443.downstream_cx_total: 3
listener.0.0.0.0_443.downstream_pre_cx_active: 0
listener.0.0.0.0_443.downstream_pre_cx_timeout: 0
listener.0.0.0.0_443.http.ingress_http.downstream_rq_1xx: 0
listener.0.0.0.0_443.http.ingress_http.downstream_rq_2xx: 0
listener.0.0.0.0_443.http.ingress_http.downstream_rq_3xx: 0
listener.0.0.0.0_443.http.ingress_http.downstream_rq_4xx: 0
listener.0.0.0.0_443.http.ingress_http.downstream_rq_5xx: 0
listener.0.0.0.0_443.http.ingress_http.downstream_rq_completed: 0
listener.0.0.0.0_443.no_filter_chain_match: 0
listener.admin.downstream_cx_active: 1
listener.admin.downstream_cx_destroy: 4
listener.admin.downstream_cx_total: 5
listener.admin.downstream_pre_cx_active: 0
listener.admin.downstream_pre_cx_timeout: 0
listener.admin.http.admin.downstream_rq_1xx: 0
listener.admin.http.admin.downstream_rq_2xx: 0
listener.admin.http.admin.downstream_rq_3xx: 0
listener.admin.http.admin.downstream_rq_4xx: 0
listener.admin.http.admin.downstream_rq_5xx: 0
listener.admin.http.admin.downstream_rq_completed: 0
listener.admin.no_filter_chain_match: 0
listener_manager.listener_added: 1
listener_manager.listener_create_failure: 0
listener_manager.listener_create_success: 4
listener_manager.listener_modified: 0
listener_manager.listener_removed: 0
listener_manager.total_listeners_active: 1
listener_manager.total_listeners_draining: 0
listener_manager.total_listeners_warming: 0
runtime.admin_overrides_active: 0
runtime.deprecated_feature_use: 0
runtime.load_error: 0
runtime.load_success: 1
runtime.num_keys: 0
runtime.num_layers: 2
runtime.override_dir_exists: 0
runtime.override_dir_not_exists: 1
server.concurrency: 4
server.days_until_first_cert_expiring: 2147483647
server.debug_assertion_failures: 0
server.hot_restart_epoch: 0
server.live: 1
server.memory_allocated: 2036352
server.memory_heap_size: 3145728
server.parent_connections: 0
server.state: 1
server.total_connections: 0
server.uptime: 15
server.version: 12192602
server.watchdog_mega_miss: 0
server.watchdog_miss: 0
tls_inspector.alpn_found: 3
tls_inspector.alpn_not_found: 0
tls_inspector.client_hello_too_large: 0
tls_inspector.connection_closed: 0
tls_inspector.read_error: 0
tls_inspector.sni_found: 3
tls_inspector.sni_not_found: 0
tls_inspector.tls_found: 3
tls_inspector.tls_not_found: 0
cluster.service_foo.upstream_cx_connect_ms: No recorded values
cluster.service_foo.upstream_cx_length_ms: No recorded values
http.admin.downstream_cx_length_ms: P0(0,0) P25(0,0) P50(3.05,3.05) P75(3.1,3.1) P90(4.06,4.06) P95(4.08,4.08) P99(4.096,4.096) P99.5(4.098,4.098) P99.9(4.0996,4.0996) P100(4.1,4.1)
http.admin.downstream_rq_time: No recorded values
http.ingress_http.downstream_cx_length_ms: P0(nan,1) P25(nan,1.075) P50(nan,6.05) P75(nan,9.025) P90(nan,9.07) P95(nan,9.085) P99(nan,9.097) P99.5(nan,9.0985) P99.9(nan,9.0997) P100(nan,9.1)
http.ingress_http.downstream_rq_time: No recorded values
listener.0.0.0.0_443.downstream_cx_length_ms: P0(nan,1) P25(nan,1.075) P50(nan,5.05) P75(nan,8.025) P90(nan,8.07) P95(nan,8.085) P99(nan,8.097) P99.5(nan,8.0985) P99.9(nan,8.0997) P100(nan,8.1)
listener.admin.downstream_cx_length_ms: P0(0,0) P25(0,0) P50(3.03333,3.03333) P75(3.06667,3.06667) P90(3.08667,3.08667) P95(3.09333,3.09333) P99(3.09867,3.09867) P99.5(3.09933,3.09933) P99.9(3.09987,3.09987) P100(3.1,3.1)
server.initialization_time_ms: P0(nan,10) P25(nan,10.25) P50(nan,10.5) P75(nan,10.75) P90(nan,10.9) P95(nan,10.95) P99(nan,10.99) P99.5(nan,10.995) P99.9(nan,10.999) P100(nan,11)

Config:

---
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 443 }
    listener_filters:
    - name: "envoy.listener.tls_inspector"
      typed_config: {}
    filter_chains:
    - filter_chain_match:
        server_names: ["www.foo.bar"]
      filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: default
              domains: "*"
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_foo }

  clusters:
  - name: service_foo
    connect_timeout: 0.25s
    type: LOGICAL_DNS
    # Comment out the following line to test on v6 networks
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: service_foo
      endpoints:
      - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                    address: www.foo.bar
                    port_value: 8443
    tls_context:
      sni: www.foo.bar

I've removed the real domain for privacy reasons; www.foo.bar in the example stands in for a domain that resolves to 127.0.0.1. Requests made directly to https://www.foo.bar:8443 succeed.

Logs:

[2019-07-11 14:00:07.357][12][debug][filter] [source/extensions/filters/listener/tls_inspector/tls_inspector.cc:72] tls inspector: new connection accepted
[2019-07-11 14:00:07.358][13][debug][main] [source/server/connection_handler_impl.cc:80] [C1] adding to cleanup list
[2019-07-11 14:00:07.358][14][debug][main] [source/server/connection_handler_impl.cc:80] [C0] adding to cleanup list
[2019-07-11 14:00:07.358][12][debug][filter] [source/extensions/filters/listener/tls_inspector/tls_inspector.cc:118] tls:onServerName(), requestedServerName: www.foo.bar
[2019-07-11 14:00:07.359][12][debug][main] [source/server/connection_handler_impl.cc:280] [C2] new connection
[2019-07-11 14:00:07.360][12][debug][http] [source/common/http/conn_manager_impl.cc:281] [C2] dispatch error: http/1.1 protocol error: HPE_INVALID_METHOD
[2019-07-11 14:00:07.360][12][debug][connection] [source/common/network/connection_impl.cc:101] [C2] closing data_to_write=66 type=2
[2019-07-11 14:00:07.360][12][debug][connection] [source/common/network/connection_impl.cc:651] [C2] setting delayed close timer with timeout 1000 ms
[2019-07-11 14:00:07.360][12][debug][connection] [source/common/network/connection_impl.cc:580] [C2] write flush complete
[2019-07-11 14:00:07.361][12][debug][connection] [source/common/network/connection_impl.cc:477] [C2] remote early close
[2019-07-11 14:00:07.361][12][debug][connection] [source/common/network/connection_impl.cc:188] [C2] closing socket: 0
[2019-07-11 14:00:07.361][12][debug][main] [source/server/connection_handler_impl.cc:80] [C2] adding to cleanup list
[2019-07-11 14:00:09.475][1][debug][main] [source/server/server.cc:170] flushing stats
mattklein123 added the question label on Jul 11, 2019
@PiotrSikora
Contributor

@davidamin for SSL passthrough, you need to use tcp_proxy, not http_connection_manager. The payload is encrypted, so there is no HTTP for Envoy to see.
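
For illustration, a minimal sketch of that change against the config above, keeping the v2 API used in the original config (the stat_prefix value is arbitrary):

    filter_chains:
    - filter_chain_match:
        server_names: ["www.foo.bar"]
      filters:
      - name: envoy.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy
          stat_prefix: tls_passthrough
          cluster: service_foo

Since the client's TLS bytes are forwarded verbatim, the cluster-level tls_context (with its sni setting) would presumably be dropped as well; otherwise Envoy would wrap the already-encrypted stream in a second TLS session toward the upstream.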

@davidamin
Author

Ahh that makes total sense! Thanks so much!
