Error: Socket closed unexpectedly & ECONNREFUSED #2141
I don't think it is a problem with master and slave.
I'm having the same issue. What's worse, there is no good way to handle the exception, and the reconnect logic just plain doesn't work. Until it's fixed, there is no way to use node-redis in any production capacity.
It took me several hours to reach here, only to learn it's not fixable.
Does the config below even work?
The reconnection of createCluster has an issue, but there is a simple workaround:

```javascript
const redis = require('redis');

const client = redis.createCluster({
  rootNodes: [
    {
      url: 'redis://127.0.0.1:7001',
      // Fix overridden default socket options
      socket: {},
    },
  ],
  defaults: {
    socket: {
      connectTimeout: 10000,
      reconnectStrategy: (/* retries, cause */) => {
        return 5000;
      },
    },
  },
});

await client.connect();
```
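The fixed 5000 ms delay above can also be swapped for an exponential backoff. A minimal sketch of such a strategy (the retry cap of 20 and the delay constants are assumptions for illustration, not taken from this thread):

```javascript
// reconnectStrategy receives the number of retries so far. Returning a
// number schedules the next attempt after that many milliseconds;
// returning an Error tells node-redis to stop reconnecting.
function reconnectStrategy(retries) {
  if (retries > 20) {
    // Assumption: give up after 20 attempts instead of retrying forever.
    return new Error('Too many reconnect attempts, giving up');
  }
  return Math.min(retries * 50, 2000); // grow the delay, capped at 2 s
}
```

This function would be passed in the same `defaults.socket.reconnectStrategy` slot as in the snippet above.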
I cannot reproduce this failure. node-redis 4.6.6. My code:

```javascript
'use strict';

const redis = require('redis4');
const log = require('log4js').getLogger();
log.level = 'debug';

const client = redis.createCluster({
  rootNodes: [
    {
      url: 'redis://127.0.0.1:7010',
      // Fix overridden default socket options
      // socket: {},
    },
  ],
  defaults: {
    socket: {
      connectTimeout: 10000,
      reconnectStrategy: (/* retries, cause */) => {
        return 5000;
      },
    },
  },
});

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function main() {
  client.on('error', e => log.error('Redis client error: %s', e.message));
  await client.connect();
  while (true) {
    try {
      const foo = await client.type('foo');
      log.info('type of foo: %o', foo);
    } catch (e) {
      log.error('error: %s', e.message);
    }
    await sleep(5000);
  }
}

main();
```

Output:
It's a bit weird that there are two successful Redis calls after the socket error (that's when I killed one of the master nodes), but you can see that it reconnected just fine once I brought that node back up.
I can only reproduce this inside k8s (update: I was also able to reproduce this in Docker). I'm running a Redis cluster with Bitnami.
I run the following (plus the code from above) in a pod:

```javascript
const client = redis.createCluster({
  rootNodes: [
    {
      url: "redis://test-redis-cluster-0.test-redis-cluster-headless:6379",
    },
    {
      url: "redis://test-redis-cluster-1.test-redis-cluster-headless:6379",
    },
    {
      url: "redis://test-redis-cluster-2.test-redis-cluster-headless:6379",
    },
    {
      url: "redis://test-redis-cluster-3.test-redis-cluster-headless:6379",
    },
    {
      url: "redis://test-redis-cluster-4.test-redis-cluster-headless:6379",
    },
    {
      url: "redis://test-redis-cluster-5.test-redis-cluster-headless:6379",
    },
  ],
  defaults: {
    password: "XXXXXXXX",
    socket: {
      connectTimeout: 10000,
      reconnectStrategy: (/* retries, cause */) => {
        return 5000;
      },
    },
  },
});
```

After killing the master Redis pod that the client was connected to, the output is:
This loops forever until I restart the pod (the only way to fix this). When I kill a Redis pod, a new pod spins up with a new IP address, which makes it impossible to run this in production on k8s.
I've drafted out a fix #2701 for my use case.
Hi! I'm working with a Redis Cluster of four masters and four slaves distributed across two hosts, with the following configuration:
Host 1:
Master on port 6000 (slots 0 to 4095);
Master on port 6001 (slots 8192 to 12287);
Slave on port 6002 (master on host 2, port 6001);
Slave on port 6003 (master on host 2, port 6000);
Host 2:
Master on port 6000 (slots 4096 to 8191);
Master on port 6001 (slots 12288 to 16383);
Slave on port 6002 (master on host 1, port 6001);
Slave on port 6003 (master on host 1, port 6000);
Everything is set up so that if a master goes down, the corresponding slave takes its place. In Redis itself this works fine, but in node-redis, once a master goes down the client keeps trying to reconnect to the closed socket and ignores the new master, repeatedly printing the same error to the console: connect ECONNREFUSED XXX.XXX.XXX.XXX:6000
I have the following configuration for my cluster in node-redis:
Is there anything I need to change in my configuration to stop the error from appearing and connect to the new master?
Thanks in advance for the help.