[stable/redis] Slave fails to connect to master with connection timeout #17245
Hi @carmenlau, I was unable to reproduce the issue using the same version of the chart and installing it with the parameters below:

$ helm install stable/redis --set networkPolicy.enabled=true --name my-release

As you can see in the logs:

$ kubectl logs my-release-redis-slave-0
11:45:02.13 INFO ==> ** Starting Redis **
...
1:S 20 Sep 2019 11:45:32.209 * MASTER <-> REPLICA sync started
1:S 20 Sep 2019 11:45:32.209 * Non blocking connect for SYNC fired the event.
1:S 20 Sep 2019 11:45:32.210 * Master replied to PING, replication can continue...
1:S 20 Sep 2019 11:45:32.211 * Partial resynchronization not possible (no cached master)
1:S 20 Sep 2019 11:45:32.212 * Full resync from master: 3c17aa9f3441d76d39243dfd7cae2dd195269e93:0
1:S 20 Sep 2019 11:45:32.250 * MASTER <-> REPLICA sync: receiving 175 bytes from master
1:S 20 Sep 2019 11:45:32.251 * MASTER <-> REPLICA sync: Flushing old data
1:S 20 Sep 2019 11:45:32.251 * MASTER <-> REPLICA sync: Loading DB in memory
1:S 20 Sep 2019 11:45:32.251 * MASTER <-> REPLICA sync: Finished with success

Could you share the complete set of parameters you're using?
Thanks for your reply @juan131! I used the
Hi @carmenlau, I was unable to reproduce the issue with your values either. See:

$ helm install stable/redis -f your-values.yaml --name my-release
$ kubectl get networkpolicy my-release-redis -o json | jq '.spec.ingress'
[
{
"from": [
{
"podSelector": {
"matchLabels": {
"my-release-redis-client": "true"
}
}
},
{
"podSelector": {
"matchLabels": {
"app": "redis",
"release": "my-release",
"role": "metrics"
}
}
},
{
"podSelector": {
"matchLabels": {
"app": "redis",
"release": "my-release",
"role": "slave"
}
}
}
],
"ports": [
{
"port": 6379,
"protocol": "TCP"
},
{
"port": 26379,
"protocol": "TCP"
}
]
},
{
"ports": [
{
"port": 9121,
"protocol": "TCP"
}
]
}
]
$ kubectl get pods -l app=redis,release=my-release,role=slave
NAME READY STATUS RESTARTS AGE
my-release-redis-slave-0 3/3 Running 3 7m11s
my-release-redis-slave-1 3/3 Running 0 6m8s

After some restarts (while the master pod was being initialised) the slave pods were able to connect to it. As you can see, if I inspect the NetworkPolicy and look for the pods whose labels are authorized to connect to the master pod, I obtain the list of slave pods.
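For reference, based on the first podSelector in the ingress rule shown above, any external client pod would need to carry the `my-release-redis-client: "true"` label to be admitted by the policy. A minimal sketch of such a pod (name and image tag are assumptions, not taken from the thread):

```yaml
# Hypothetical client pod; the label below matches the first podSelector
# in the ingress rule above, so the policy would admit its traffic on 6379.
apiVersion: v1
kind: Pod
metadata:
  name: redis-client              # name is illustrative
  labels:
    my-release-redis-client: "true"
spec:
  containers:
    - name: redis-cli
      image: bitnami/redis:5.0.5  # tag assumed from the chart's app version
      command: ["sleep", "infinity"]
```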
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
I'm having the same issue with a new deployment. My values.yaml file is pretty simple:

cluster:
enabled: true
slaveCount: 1
networkPolicy:
enabled: true
rbac:
create: true
usePassword: false
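One thing worth checking (an assumption on my part, not something confirmed in this thread): the stable/redis chart also exposes a `networkPolicy.allowExternal` value; when it is true, the generated policy admits ingress from any source on the Redis port instead of requiring the client label, which can help isolate whether podSelector matching is the problem:

```yaml
networkPolicy:
  enabled: true
  # Assumed chart option: with allowExternal true, the generated ingress
  # rule drops the "<release>-redis-client: true" label requirement.
  allowExternal: true
```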
Having the same issue.
What version of the Redis Helm chart are you using?
@juan131 here is what I've deployed:

$ helm3 -n redis list
redis redis 3 2019-11-06 12:13:47.789516 -0500 EST deployed redis-9.5.1 5.0.5
Hi @juan131, I redeployed the same chart in a newly provisioned cluster (Kubernetes 1.12).
In your YAML configuration, change the service to type LoadBalancer. This could be for
I'm investigating the same issue: when the NetworkPolicy is enabled, I hit this error. If I disable the network policy, the slaves can connect to the master.
Having the same issue over here with
@rsecob: can you please share the egress policy that you think allows connections from the Redis slave to the master?
I basically added
I am having the same issues as everyone above. When the networkPolicy is set to

Does anyone have any idea of a reliable way to fix this?
@rsecob: I also tried defining the namespaceSelector to match my namespace label, but it didn't change anything. Here is my NetworkPolicy spec section:

spec:
podSelector:
matchLabels:
app: redis
release: webs-sentinel
policyTypes:
- Ingress
- Egress
ingress:
- ports:
- port: 6379
protocol: TCP
- port: 26379
protocol: TCP
- ports:
- port: 9121
protocol: TCP
egress:
- ports:
- port: 6379
protocol: TCP
- port: 26379
protocol: TCP
to:
- podSelector:
matchLabels:
app: redis
release: webs-sentinel
- namespaceSelector:
matchLabels:
product: pervers-pepere
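Worth noting (my hypothesis, not confirmed in this thread): the egress block above only allows TCP 6379/26379, so it also blocks DNS traffic, and the slave has to resolve the master's service hostname before it can open the replication connection, which would explain the timeout. A sketch of an additional egress rule permitting DNS, assuming the cluster resolver listens on port 53:

```yaml
egress:
  # ... existing rules allowing 6379/26379 to the redis pods ...
  # Additional rule (assumption): without it, this egress policy also
  # blocks the port-53 lookups the slave needs to resolve the master's
  # hostname, since a pod selected by any egress rule denies all other
  # outbound traffic by default.
  - ports:
      - port: 53
        protocol: UDP
      - port: 53
        protocol: TCP
```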
Try again putting a dash before the 'to'; for some reason it seemed to have an effect on my end.
@rsecob: same error when applying the network policy with a dash before the 'to', as you suggested.
When I delete the network policy and restart the slave StatefulSet, the pods are Running and I get very different logs instead.
This issue is being automatically closed due to inactivity.
Facing the same issue.
Hi,

Given the [...] In this issue we tried to explain more carefully the reasons and motivations behind this transition; please don't hesitate to add a comment in that issue if you have any question related to the migration itself.

Regards,
@engineerakki Further debugging: use tshark.
Describe the bug
Slave fails to connect to master with connection timeout.
Which chart:
stable/redis version 9.1.11
What happened:
Following is the log of the slave pod's redis container.
How to reproduce it (as minimally and precisely as possible):
Install stable/redis with cluster and networkPolicy enabled.
Anything else we need to know:
I tried removing the egress part of the network policy manually, and then the slaves could connect to the master successfully.