Hello!

Versions in use:

consul --version
Consul v1.4.4
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)

consul-esm --version
v0.3.3
Consul members:
Node      Address          Status  Type    Build  Protocol  DC    Segment
consul-1  10.10.10.1:8301  alive   server  1.4.4  2         main  <all>
consul-2  10.10.10.2:8301  alive   server  1.4.4  2         main  <all>
consul-3  10.10.10.3:8301  alive   server  1.4.4  2         main  <all>

Consul-1 configuration is as follows:
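The actual configuration files are not shown in this copy of the report, so the following is only a minimal sketch of what a server configuration matching the setup described here could look like; data_dir, client_addr, bootstrap_expect and ui are assumptions, not values taken from the real setup.

{
  "node_name": "consul-1",
  "datacenter": "main",
  "server": true,
  "bootstrap_expect": 3,
  "bind_addr": "10.10.10.1",
  "client_addr": "0.0.0.0",
  "data_dir": "/var/lib/consul",
  "ui": true
}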
The consul-2 and consul-3 nodes are configured with "start_join" and "retry_join" directives containing the first node's IP address, so that the Consul nodes can form a cluster. The rest of the configuration is identical, meaning every node acts as a server.
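In other words, the consul-2 and consul-3 configurations would additionally carry something along these lines (again a sketch, assuming the JSON configuration format):

{
  "start_join": ["10.10.10.1"],
  "retry_join": ["10.10.10.1"]
}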
Besides Consul itself, each node runs the consul-esm service. The configuration in use on all nodes and the flags for launching the services are as follows:
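Neither the consul-esm configuration nor the exact launch commands survive in this copy of the report, so the snippet below is purely illustrative: the option names are real consul/consul-esm options, but every value and path is an assumption.

# /etc/consul-esm.hcl (illustrative values only)
log_level      = "INFO"
consul_service = "consul-esm"
consul_kv_path = "consul-esm/"
datacenter     = "main"
http_addr      = "localhost:8500"
ping_type      = "udp"

external_node_meta {
  "external-node" = "true"
}

The services would then be started with something like:

consul agent -config-dir=/etc/consul.d
consul-esm -config-file=/etc/consul-esm.hcl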
With this being said, here are the instructions to reproduce the bug. First, register a new node whose checks carry custom interval and timeout values (a sketch of such a registration follows below). Secondly, query the check configuration and make sure it is correct; at this point the interval is still present. Finally, wait one minute and query the health checks once again: as the output further below shows, the interval and timeout settings are now absent from the results.
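The exact registration call is not preserved here; it would have looked roughly like the sketch below. The node address and the 60s/10s durations are placeholders, and the second check would be registered the same way against port 8082.

curl --request PUT --data @register.json http://consul-1:8500/v1/catalog/register

with register.json along the lines of:

{
  "Datacenter": "main",
  "Node": "my.hardware.device",
  "Address": "192.0.2.10",
  "NodeMeta": {
    "external-node": "true",
    "external-probe": "true"
  },
  "Check": {
    "Node": "my.hardware.device",
    "CheckID": "firstcheck",
    "Name": "firstcheck",
    "Status": "warning",
    "Definition": {
      "HTTP": "http://consul.check.node:8081",
      "Header": {"hostname": ["my.hardware.device"]},
      "Method": "GET",
      "Interval": "60s",
      "Timeout": "10s"
    }
  }
}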
curl http://consul-1:8500/v1/health/node/my.hardware.device
[{"Node":"my.hardware.device","CheckID":"firstcheck","Name":"firstcheck","Status":"passing","Notes":"","Output":"HTTP GET http://consul.check.node:8081: 200 OK Output: There is a host","ServiceID":"","ServiceName":"","ServiceTags":[],"Definition":{"HTTP":"http://consul.check.node:8081","Header":{"hostname":["my.hardware.device"]},"Method":"GET"},"CreateIndex":19510337,"ModifyIndex":19510342},{"Node":"my.hardware.device","CheckID":"secondcheck","Name":"secondcheck","Status":"critical","Notes":"","Output":"HTTP GET http://consul.check.node:8082: 404 Not Found Output: There is no host","ServiceID":"","ServiceName":"","ServiceTags":[],"Definition":{"HTTP":"http://consul.check.node:8082","Header":{"hostname":["my.hardware.device"]},"Method":"GET"},"CreateIndex":19510337,"ModifyIndex":19510348}]
In fact, the checks are now executed with the default interval, as can be seen from the HTTP server log.

Let me know if you require any more information.
Hi @angryp, sincere apologies for the late reply. Thanks so much for the details of your issue.
I was able to reproduce your issue where the custom check definition interval disappears after a health check when using Consul version 1.4.4.
It looks like this issue has been resolved in versions 1.5.0 and onwards; more specifically, it appears to have been fixed by this pull request: hashicorp/consul#5553.
In your example, the entities are registered with checks whose status is "warning". When the health check is performed and the status changes, the status update is sent to the transaction API, which is what was erasing the interval and timeout values. The issue behind the PR linked above, hashicorp/consul#5477, describes a problem similar to yours.