Nomad version (client)
Nomad v1.9.6
BuildDate 2025-02-11T18:55:10Z
Revision 7f8b449+CHANGES
Nomad version (server)
Nomad v1.9.5
BuildDate 2025-01-14T18:35:12Z
Revision 0b7bb8b+CHANGES
Issue
In Nomad, to make checks work with Consul Connect, you need to use the expose stanza (or set expose = true in checks). From my tests, this doesn't actually fully expose the endpoint: traffic still goes through the Envoy sidecar, but it bypasses the need for mutual TLS and namespace isolation. This lets other services (like the Consul health checks) reach the endpoint without being part of the Consul service mesh.

I've been using this mechanism to get Consul health checks working with my services, and according to the documentation, you can expose two different paths (even with different protocols) on the same port in a Consul Connect-enabled service (with transparent_proxy).

That's when the issue occurs: on the Envoy sidecar, the task hook fails to bootstrap and throws an error without any details:

Task hook failed: envoy_bootstrap: error creating bootstrap configuration for Connect proxy sidecar: exit status 1; see: https://developer.hashicorp.com/nomad/s/envoy-bootstrap-error

Note that this only happens when exposing two different paths using proxy.expose.path, as shown in my job file below.

It's odd because the issue seems to occur more often when the service is updating or recycling replicas. I was able to reproduce it when creating a job, but it feels inconsistent, almost like a race condition.
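To make the setup concrete, here is a minimal sketch of the kind of job I mean. The service name, image, ports, and paths are placeholders rather than my actual job file (which is attached under "Job file" below); the relevant part is the two expose.path blocks sharing one listener port.

```hcl
job "healthcheck-example" {
  group "app" {
    network {
      mode = "bridge"

      # Dynamic port that Envoy opens for the exposed (non-mTLS) paths.
      port "healthcheck" {
        to = -1
      }
    }

    service {
      name = "app"
      port = "9090"

      connect {
        sidecar_service {
          proxy {
            transparent_proxy {}

            # Two different paths exposed on the same listener port;
            # this is the combination that triggers the bootstrap failure.
            expose {
              path {
                path            = "/healthz"
                protocol        = "http"
                local_path_port = 9090
                listener_port   = "healthcheck"
              }

              path {
                path            = "/ready"
                protocol        = "http"
                local_path_port = 9090
                listener_port   = "healthcheck"
              }
            }
          }
        }
      }

      check {
        name     = "liveness"
        type     = "http"
        port     = "healthcheck"
        path     = "/healthz"
        interval = "10s"
        timeout  = "3s"
      }
    }

    task "app" {
      driver = "docker"

      config {
        image = "example/app:latest" # placeholder image
      }
    }
  }
}
```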
Reproduction steps
Run the job with the definition listed at the bottom of this issue:

$ nomad job run healthcheck.nomad.hcl
Expected Result
I expect Envoy to bootstrap correctly and expose both paths on the same port, as described in the documentation.
Actual Result
Envoy fails to bootstrap due to an error in its configuration. I don't have access to the generated configuration right now, so I can't provide further details.
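If it helps with triage, this is roughly what I would run to collect more detail next time it happens (the allocation ID is a placeholder, and the sidecar task name assumes the hypothetical "app" service from the sketch above, since Nomad names sidecars connect-proxy-<service>):

```sh
$ nomad alloc status <alloc-id>                            # task events, including the envoy_bootstrap hook failure
$ nomad alloc logs -stderr <alloc-id> connect-proxy-app    # sidecar stderr, if the task got far enough to start
```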
Job file
Extras
I've reproduced this problem even with a "naked" Consul, without any complex service configurations or additional settings. I'm also not sure whether this issue should be reported to the Nomad or Consul repository, as it appears to be more of a Consul Connect issue rather than a Nomad one.