feat: support healthcheck when connecting to an etcd cluster
#96
Conversation
ping @membphis @spacewander
I'm not sure it's appropriate to write test cases this way.
I don't have a better idea than yours.
…duced locally in CI.
ping @membphis it's ok.
I tried the following two test cases, which worked in my local environment, but always failed in CI.

=== TEST 6: mock tcp connect timeout and recovery, report the node unhealthy and healthy
--- http_config eval: $::HttpConfig
--- config
    location /t {
        content_by_lua_block {
            local network_isolation_cmd = "export PATH=$PATH:/sbin && iptables -A INPUT -p tcp --dport 12379 -j DROP"
            io.popen(network_isolation_cmd)
            ngx.sleep(1)
            local etcd, err = require("resty.etcd").new({
                protocol = "v3",
                api_prefix = "/v3",
                http_host = {
                    "http://127.0.0.1:12379",
                    "http://127.0.0.1:22379",
                    "http://127.0.0.1:32379",
                },
                user = 'root',
                password = 'abc123',
                cluster_healthcheck = {
                    shm_name = 'test_shm',
                },
            })
            local res, err = etcd:set("/healthcheck", "yes")
            local network_recovery_cmd = "export PATH=$PATH:/sbin && iptables -D INPUT -p tcp --dport 12379 -j DROP"
            io.popen(network_recovery_cmd)
            ngx.sleep(1)
        }
    }
--- request
GET /t
--- ignore_response
--- error_log eval
[qr/unhealthy TCP increment.*127.0.0.1:12379/,
qr/healthy SUCCESS increment.*127.0.0.1:12379/]
--- timeout: 10
=== TEST 7: mock network partition and recovery, report the node unhealthy and healthy
--- http_config eval: $::HttpConfig
--- config
    location /t {
        content_by_lua_block {
            io.popen("export PATH=$PATH:/sbin && iptables -A INPUT -p tcp --dport 22380 -j DROP")
            io.popen("export PATH=$PATH:/sbin && iptables -A INPUT -p tcp --dport 32380 -j DROP")
            ngx.sleep(3)
            local etcd, err = require("resty.etcd").new({
                protocol = "v3",
                api_prefix = "/v3",
                http_host = {
                    "http://127.0.0.1:12379",
                    "http://127.0.0.1:22379",
                    "http://127.0.0.1:32379",
                },
                user = 'root',
                password = 'abc123',
                cluster_healthcheck = {
                    shm_name = 'test_shm',
                },
            })
            local res, err = etcd:set("/network/partition", "test")
            io.popen("export PATH=$PATH:/sbin && iptables -D INPUT -p tcp --dport 22380 -j DROP")
            io.popen("export PATH=$PATH:/sbin && iptables -D INPUT -p tcp --dport 32380 -j DROP")
            ngx.sleep(5)
        }
    }
--- request
GET /t
--- timeout: 20
--- ignore_response
--- error_log eval
[qr/unhealthy TCP increment.*127.0.0.1:12379/,
qr/healthy SUCCESS increment.*127.0.0.1:12379/]
@membphis @spacewander @nic-chen pls review
We are busy making a new release. We will take care of this in a few days.
Got it.
Solved the above.
Solved; I've made some changes so the error is returned instead of printed to the error log.
@membphis @spacewander pls review again
    end

    local err
    checker, err = healthcheck.new({
Wow, I think that is the wrong way.
Different etcd instances can create different checker objects; each checker object should belong to its own etcd instance, so we cannot use a shared checker across different etcd instances.
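A minimal sketch of what the reviewer is suggesting: store the checker on the etcd client instance instead of in a shared module-level upvalue. This assumes the lua-resty-healthcheck API (`healthcheck.new` with `name`, `shm_name`, and `checks`); the function name `create_checker`, the `self` client object, and the specific interval/failure values are illustrative, not taken from the PR.

```lua
local healthcheck = require("resty.healthcheck")

-- Create a checker bound to one etcd client instance.
-- `self` is the etcd client table returned by new();
-- `conf.shm_name` comes from the user's cluster_healthcheck config.
local function create_checker(self, conf)
    local checker, err = healthcheck.new({
        name = "etcd-cluster-health-check",
        shm_name = conf.shm_name,
        checks = {
            active = {
                type = "tcp",
                healthy   = { interval = 1, successes = 1 },
                unhealthy = { interval = 1, tcp_failures = 2 },
            },
        },
    })
    if not checker then
        return nil, err
    end

    -- Keep the checker on the instance, not in a module-level local,
    -- so two etcd clients never share (and overwrite) the same checker.
    self.checker = checker
    return checker
end
```

This requires an OpenResty runtime with the `shm_name` zone declared via `lua_shared_dict`, so it is a design sketch rather than standalone runnable code.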
fix: #55