respect_expected_rq_timeout is not leading to expected behavior #17016
Comments
Thanks for the report. Can you send trace logs of this issue? I suspect you're hitting a different timeout somewhere.
Here you go @alyssawilk. They're from docker-compose so the logs from different containers are interleaved, but prefixed by the container name. The server app is configured to respond in ~30s, but the request is timing out after 15s. Sandbox: https://github.com/freddygv/Envoy-respect_expected_rq_timeout-issue/
Looks like you're hitting a different timeout:

```
server-proxy_1 | [2021-06-21 18:43:25.401][16][debug][http] [source/common/http/filter_manager.cc:883] [C0][S14324767371825436092] Sending local reply with details upstream_response_timeout
```
I'm not sure what to make of that. Upstream response timeouts are tied to the route timeout: https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/router_filter#x-envoy-upstream-rq-timeout-ms

I updated the route config on the server proxy to use an 11s timeout, and the client requests now time out at 11s with the same error/logs. Unless I'm misunderstanding, this seems like exactly the issue that #7358 set out to solve: the client proxy expects a timeout of 120s, but the server proxy is not respecting it, in favor of its own route timeout.
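For reference, the route timeout on the server proxy that appears to be winning would look roughly like this (a minimal sketch based on the description above, not copied from the repro repo; names like `local_service` and `local_app` are placeholders):

```yaml
# Server-proxy route config (sketch). The expectation is that, with
# respect_expected_rq_timeout enabled on the router filter, the inbound
# x-envoy-expected-rq-timeout-ms header (120s) would take precedence,
# but in practice this route timeout is what the request hits.
route_config:
  virtual_hosts:
    - name: local_service
      domains: ["*"]
      routes:
        - match: { prefix: "/" }
          route:
            cluster: local_app
            timeout: 11s
```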
New trace logs after dropping the route timeout on the server proxy to 11s:
I'm seeing the same pattern: the expected timeouts diverge where I would expect them to match:
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.
@snowp does this seem expected based on the issue you originally reported?
This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted" or "no stalebot". Thank you for your contributions.
Related to #7358
Description:
When configuring `respect_expected_rq_timeout`, the value in `x-envoy-expected-rq-timeout-ms` doesn't actually seem to be propagated. The docs say:

This makes it seem like, if `x-envoy-expected-rq-timeout-ms` is present in an inbound request, the matching outbound request should carry the same value. What I expect is that this should work similarly to `x-envoy-upstream-rq-timeout-ms` (which I confirmed leads to a matching timeout being set in the outbound `x-envoy-expected-rq-timeout-ms` header).

Though it's hard to know what's expected, given the negative phrasing in the docs: if the option is set, what does it mean to *not* ignore the header? And how does the upstream cluster timeout relate to a route timeout?
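For context, enabling the option on the router filter looks roughly like this (a sketch of how I understand the setting is enabled, not the exact config from the repro repo):

```yaml
# HTTP connection manager filter chain (sketch).
http_filters:
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      # Per the docs, when true the router should use the timeout from the
      # inbound x-envoy-expected-rq-timeout-ms header rather than ignoring it.
      respect_expected_rq_timeout: true
```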
Repro steps:
docker-compose setup available here: https://github.com/freddygv/Envoy-respect_expected_rq_timeout-issue
Admin and Stats Output:
client:
server:
Config:
client:
server:
Logs:
This is just the snippet showing the inbound request with a timeout of 120s in the header, and the corresponding outbound request with a header showing the default of 15s.
This request timed out after 15s, rather than 120s.
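The behavior I expected can be modeled like this (a hypothetical sketch of the selection logic as I read the docs, not Envoy's actual implementation; the header name is real, the function is illustrative):

```python
def effective_timeout_ms(headers: dict, route_timeout_ms: int,
                         respect_expected_rq_timeout: bool) -> int:
    """Model of the timeout selection I expected from the docs.

    With respect_expected_rq_timeout enabled, an inbound
    x-envoy-expected-rq-timeout-ms header should take precedence over
    the route timeout; otherwise the route timeout applies.
    """
    expected = headers.get("x-envoy-expected-rq-timeout-ms")
    if respect_expected_rq_timeout and expected is not None:
        return int(expected)
    return route_timeout_ms


# With the header set to 120000 (120s) I expected a 120s timeout,
# but observed the 15s route default instead.
print(effective_timeout_ms(
    {"x-envoy-expected-rq-timeout-ms": "120000"}, 15000, True))
```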