
Event loop does not exit when redis cluster loses contact #1500

Closed
tomaratyn opened this issue Feb 2, 2022 · 1 comment
tomaratyn commented Feb 2, 2022

If an ioredis client configured to talk to a cluster loses contact with that cluster, the client shuts down but Node does not exit. Node cannot exit because some of the script-cache-clearing intervals are never cleared from the event loop.

This has a severe impact when an app loses contact with the cluster: many of us rely on Node being able to exit so that our supervisor can start a new process of our ioredis app.

I believe this may also be related to other cluster-related issues.

Consider the following script (tested as examples/simple_cluster_case.ts):

import { Cluster, default as RedisClient } from "../lib";

import { v4 as uuidv4 } from "uuid";

export const ALL_STOP_SIGNALS = new Set([
  "SIGINT",
  "SIGTERM",
  "uncaughtException",
  "unhandledRejection",
]);

const sleep = (ms = 10) =>
  new Promise((resolve) => setTimeout(() => resolve(null), ms));

// @ts-ignore
type RedisClusterClient = Cluster & RedisClient;

const cluster = new Cluster([
  { host: "127.0.0.1", port: 7000 },
  { host: "127.0.0.1", port: 7001 },
  { host: "127.0.0.1", port: 7002 },
  { host: "127.0.0.1", port: 7003 },
  { host: "127.0.0.1", port: 7004 },
  { host: "127.0.0.1", port: 7005 },
]) as RedisClusterClient;

let keepGoing = true;

ALL_STOP_SIGNALS.forEach((signal) => {
  // @ts-ignore
  process.on(signal, () => {
    keepGoing = false;
  });
});

cluster.on("error", (...args) => {
  console.error("Got error: %o", args);
  keepGoing = false;
  quitAndDisconnectShutdown();
});

cluster.on("close", (...args) => {
  console.log("got close event: %o", args);
});


cluster.on("reconnecting", (...args) => {
  console.error("Got reconnecting: %o", args);
});


const quitAndDisconnectShutdown = () => {
  cluster.quit((...args) => {
    console.log("Quit Returned: %o", args);
    cluster.disconnect(false);
    console.log("disconnected");
  });
};

const main = async () => {
  while (keepGoing) {
    const key = uuidv4();
    cluster.set(key, "bar");
    cluster.get(key, (err, result) => {
      const matches = result === "bar" ? "✅" : "❌";
      console.log(
        `${matches} : For ${key} result is ${result}. Expected is bar. `
      );
    });
    await sleep(100);
  }
  console.log("post loop");

  quitAndDisconnectShutdown();
};

main();

Then disable the cluster.

The client will shut down, but Node won't completely exit. Investigation with wtf-node revealed some leftover intervals.

tomaratyn (Author) commented

I can confirm that #1502 solved this for us.

Thank you.
