
[stable/redis-ha] haproxy leads to the redis cluster service being unavailable #16708

Closed
jeremyxu2010 opened this issue Aug 30, 2019 · 4 comments · Fixed by #16709

Comments

@jeremyxu2010
Contributor

Describe the bug
The latest stable/redis-ha adds a new feature, "Added HAProxy to support exposed Redis environments". I used it in a production environment and found that it makes the Redis cluster service unavailable: all Redis instances become role:slave.

Version of Helm and Kubernetes:
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Which chart:
stable/redis-ha 3.7.2

What happened:
All Redis instances become role:slave.

What you expected to happen:
One Redis instance should become role:master.

How to reproduce it (as minimally and precisely as possible):

  1. Run multiple programs that constantly write to Redis. These programs access the Redis service at the address test-redis-ha-haproxy:6379.
  2. Install the redis-ha chart as follows:
helm install --name test --set haproxy.enabled=true --set haproxy.replicas=3 stable/redis-ha
  3. After a while, check the 3 Redis instances and find that all of them are role:slave (a combined check is sketched below):
kubectl exec -ti test-redis-ha-server-0 -- redis-cli info replication | grep role
kubectl exec -ti test-redis-ha-server-1 -- redis-cli info replication | grep role
kubectl exec -ti test-redis-ha-server-2 -- redis-cli info replication | grep role
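For convenience, the same check can be run as one loop over the three server pods (a sketch based on the commands above; the pod names assume the release name test used in step 2):

```
for i in 0 1 2; do
  kubectl exec -ti test-redis-ha-server-$i -- redis-cli info replication | grep role
done
# a healthy deployment prints role:master for exactly one pod;
# in this failure case every pod prints role:slave
```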
@DandyDeveloper
Collaborator

@jeremyxu2010 I have not been able to reproduce or experience this. Can you check with the latest version of the chart and make sure you purge everything before installing?

I remember once having a race condition with the initial pod being too slow to initialize, resulting in some funky issues, but I only had that once and wasn't ever able to replicate it.

@jeremyxu2010
Contributor Author

@DandyDeveloper I found the problem.

When HAProxy is deployed with multiple replicas, two different HAProxy instances may select different Redis instances as the master; I suspect this is caused by HAProxy's check interval. The incorrect Redis config option min-slaves-to-write then allowed both of those Redis instances to accept writes.

The Redis config option min-slaves-to-write should be min-replicas-to-write for Redis 5.x; see the PR.
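For reference, this is the rename in question at the redis.conf level (a minimal sketch; the value 1 is only an illustrative threshold):

```
# Redis 4.x name: stop accepting writes when fewer than 1
# healthy replica is connected to this master
min-slaves-to-write 1

# Redis 5.x name for the same setting
min-replicas-to-write 1
```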

@DandyDeveloper
Collaborator

@jeremyxu2010

> When HAProxy is deployed with multiple replicas, two different HAProxy instances may select different Redis instances as the master

This concerns me, as it shouldn't happen: the HAProxy instances base their master selection on the health check they run against the Redis instances.

It will explicitly look for the master.
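For context, that check follows the usual HAProxy/Redis pattern of probing each server over TCP and only keeping the one that reports role:master in the backend. A minimal sketch of such a backend, not the chart's exact template (the server names and the 1s interval are illustrative):

```
backend redis_master
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis-0 test-redis-ha-announce-0:6379 check inter 1s
    server redis-1 test-redis-ha-announce-1:6379 check inter 1s
    server redis-2 test-redis-ha-announce-2:6379 check inter 1s
```

The inter value here is the check interval mentioned above: between two checks around a failover, different HAProxy replicas can briefly disagree about which server is passing the role:master check.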

Is this something you think we need more support for in the chart? It looks like you solved it. Let me know. If it's more of a niche use case, please close this issue.

Thank you!

@jeremyxu2010
Contributor Author

jeremyxu2010 commented Sep 10, 2019 via email
