[stable/redis-ha] haproxy leads to redis cluster service being unavailable #16708
@jeremyxu2010 I have not been able to reproduce or experience this. Can you check with the latest version of the chart and make sure you purge everything before installing? I remember once having a race condition with the initial pod being too slow to initialize, resulting in some funky issues, but I only had that once and wasn't ever able to replicate it.
@DandyDeveloper I found the problem. When deploying haproxy with multiple replicas, two different haproxy instances may choose different redis master instances; I guess this is caused by haproxy's check interval, which then leads to the wrong redis config.
This concerns me, as it shouldn't happen: they base their master selection on pings to the Redis instances, and it will explicitly look for the master. Is this something you think we need more support for in the chart? As it looks like you solved it, let me know. If it's more of a niche use-case, please close this issue. Thank you!
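For context, the master detection being discussed is typically an HAProxy tcp-check against each Redis instance's replication role. A minimal sketch of that pattern (the backend name and announce addresses are illustrative, not the chart's exact generated config):

```
# haproxy.cfg (sketch): probe every Redis instance and only route to
# the one whose INFO replication output reports role:master.
backend redis_master
    mode tcp
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis-0 test-redis-ha-announce-0:6379 check inter 1s
    server redis-1 test-redis-ha-announce-1:6379 check inter 1s
    server redis-2 test-redis-ha-announce-2:6379 check inter 1s
```

Because each haproxy replica runs these checks independently on its own interval, two replicas can briefly disagree about which server passes the role:master check during a fail-over, which is the race described below.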
Consider the following scenario:
1. haproxy01 explicitly looks for the master and gets master01
2. the redis cluster completes a fail-over in a short time
3. haproxy02 explicitly looks for the master and gets master02
I think this problem would still occur even if multiple clients used sentinel to select the master themselves; it has nothing to do with whether haproxy is used. That is why redis provides the `min-replicas-to-write` config option to protect the redis cluster.
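A minimal redis.conf sketch of that protection (the values are illustrative and should be tuned to your replica count and tolerated lag):

```
# Refuse writes on a master that cannot reach at least one replica,
# so a stale master left behind by a fail-over rejects writes
# instead of silently diverging.
min-replicas-to-write 1
# Consider a replica lost if it hasn't acknowledged within 10 seconds.
min-replicas-max-lag 10
```

In the stable/redis-ha chart these options can usually be supplied through the chart's redis configuration values; check the chart's values.yaml for the exact key.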
Describe the bug
The latest stable/redis-ha chart added a new feature, "Added HAProxy to support exposed Redis environments". I used it in a production environment and found that it leads to the redis cluster service being unavailable: all redis instances become role:slave.
Version of Helm and Kubernetes:
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Which chart:
stable/redis-ha 3.7.2
What happened:
All redis instances become role:slave.
What you expected to happen:
One redis instance should become role:master.
How to reproduce it (as minimally and precisely as possible):
1. helm install --name test --set haproxy.enabled=true --set haproxy.replicas=3 stable/redis-ha
2. Connect to test-redis-ha-haproxy:6379 and check the replication info; every instance reports role:slave.
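A quick way to check the symptom from inside the cluster (a sketch; the test-redis-ha-haproxy service name follows from the release name used above):

```
# Run a throwaway redis-cli pod and ask the haproxy service for its role.
kubectl run redis-cli --rm -it --restart=Never --image=redis:5 -- \
  redis-cli -h test-redis-ha-haproxy -p 6379 INFO replication
# Expected: role:master. Observed with this bug: role:slave.
```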