'Duplicate Route' connection floods on implicit routes #5483
Labels
defect
Suspected defect such as a bug or regression
Comments
derekcollison added a commit that referenced this issue on Jul 22, 2024
This is an alternate approach to PR #5484 from @wjordan. Using the code in that PR with the test added in this PR, I could still see duplicate routes (up to 125 in one of the matrix runs), and there was still a data race (which could easily have been fixed). The main issue is that the increment happens in connectToRoute, which runs in a goroutine, so there were still chances for duplicates. Instead, I took the approach that those duplicates were the result of far too many gossip protocol messages. Suppose that servers A and B are already connected. C connects to A. A gossips to B that it should connect to C. When that happened, B would gossip server C to A and C would gossip server B to A, all of which was unnecessary. It would grow quite fast with the size of the cluster (several thousand messages for a cluster size of 15 or so). Resolves #5483. Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
neilalexander pushed a commit that referenced this issue on Jul 29, 2024
neilalexander pushed a commit that referenced this issue on Jul 29, 2024
ReubenMathew pushed a commit to ReubenMathew/nats-server that referenced this issue on Jul 29, 2024
bruth pushed a commit that referenced this issue on Jul 29, 2024
bruth pushed a commit that referenced this issue on Jul 29, 2024
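The gossip amplification described in the commit message can be illustrated with a toy simulation (an assumption-laden sketch, not nats-server code: the join-through-a-seed topology, `simulate`, and `register` are all invented for illustration). Servers join one at a time through a seed server; in the "naive" scheme every server that learns of a new route gossips it onward, while in the "reduced" scheme only the server that accepted the original connection gossips, suppressing the redundant echoes:

```go
package main

import (
	"fmt"
	"sort"
)

// simulate counts gossip messages while n servers join a cluster one at a
// time through seed server 0. With suppressEcho=false, every server that
// registers a new route gossips it to all of its peers (the naive scheme);
// with suppressEcho=true, implicit routes are not re-gossiped.
func simulate(n int, suppressEcho bool) int {
	gossip := 0
	known := make([]map[int]bool, n)
	for i := range known {
		known[i] = map[int]bool{}
	}
	var register func(a, b int, origin bool)
	register = func(a, b int, origin bool) {
		if a == b || known[a][b] {
			return // route already exists: nothing to do
		}
		known[a][b] = true
		known[b][a] = true
		if suppressEcho && !origin {
			return // implicit routes are not re-gossiped
		}
		// Snapshot a's peers (sorted for determinism) before gossiping.
		peers := make([]int, 0, len(known[a]))
		for p := range known[a] {
			if p != b {
				peers = append(peers, p)
			}
		}
		sort.Ints(peers)
		for _, p := range peers {
			gossip++              // a tells p about the new route to b
			register(p, b, false) // p dials b, forming an implicit route
		}
	}
	for i := 1; i < n; i++ {
		register(0, i, true) // server i joins by connecting to the seed
	}
	return gossip
}

func main() {
	fmt.Println("gossip messages, naive (n=15):", simulate(15, false))
	fmt.Println("gossip messages, reduced (n=15):", simulate(15, true))
}
```

In this toy model the reduced scheme sends one gossip message per already-known peer when a server joins, while the naive scheme re-gossips every implicit route and the message count grows much faster with cluster size, matching the behavior the commit message describes.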
Observed behavior
When a large cluster establishes connections to an implicit route, all of the nodes can flood a host with a large number of redundant 'Duplicate Route' connections.
In production with a large cluster of 100 nodes, I've observed intermittent floods of
Router connection closed: Duplicate Route
logged, up to 10k connections/sec to a single server, when a server experiences a broken/unreliable network connection to the rest of the cluster.
Expected behavior
I would expect 'Duplicate Route' connections to be relatively rare and the rate of connection attempts for any given route to be limited to the retry rate (one per second), instead of an unbounded flood of duplicate connections being sent all at once.
Server and client version
nats-server: v2.10.16
Host environment
No response
Steps to reproduce
My proposed fix in f3d3565 includes a test demonstrating the issue, counting the total number of 'Duplicate Route' log entries when establishing a 10-server cluster:
nats-server/server/routes_test.go, lines 4357 to 4369 at f3d3565
Without any fix, the number of duplicate routes:
My best understanding is that there's a feedback loop on implicit routes:
1. When a route is registered (addRoute), the server sends an INFO broadcast to all servers in the cluster (forwardNewRouteInfoToKnownServers).
2. When a server receives the INFO broadcast (processImplicitRoute), it creates an implicit-route connection (connectToRoute).
3. When the implicit-route connection registers a new route (addRoute), see 1.

Although processImplicitRoute skips the connection if it's explicitly configured or if the route already exists, this only checks against registered routes (not unregistered routes or unestablished connections), so a flood of connection attempts can pile up before a successful route registration prevents future connections.

With a fix that checks the route-connection count before dialing a new connection, the number of duplicate routes isn't completely eliminated for larger clusters, but is significantly lower: