
Serf Lua RPC Client #865

Closed
subnetmarco opened this issue Jan 12, 2016 · 4 comments
Labels
task/feature Requests for new features in Kong

Comments

@subnetmarco
Member

Create a Serf client in Lua that follows the Serf RPC protocol: https://www.serfdom.io/docs/agent/rpc.html
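
For reference, a minimal sketch of the first step of such a client, assuming the lua-MessagePack library and the Serf agent listening on its default RPC address 127.0.0.1:7373 (both assumptions, not decisions made in this issue):

```lua
-- Minimal sketch (assumptions: lua-MessagePack for encoding, agent on the
-- default Serf RPC address 127.0.0.1:7373).
local mp = require "MessagePack"

local sock = ngx.socket.tcp()
sock:settimeout(2000)  -- ms

local ok, err = sock:connect("127.0.0.1", 7373)
if not ok then
  return nil, "failed to connect to the serf agent: " .. err
end

-- Every Serf RPC request is a msgpack-encoded header map, optionally followed
-- by a msgpack-encoded body map, sent back to back. The mandatory first
-- request is the handshake.
local header = mp.pack({ Command = "handshake", Seq = 0 })
local body   = mp.pack({ Version = 1 })

local bytes, err = sock:send(header .. body)
if not bytes then
  return nil, "failed to send handshake: " .. err
end

-- The agent replies with a msgpack header map { Seq = 0, Error = "" };
-- reading it requires decoding msgpack incrementally off the socket, since
-- there is no length prefix on the wire.
```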

@subnetmarco subnetmarco added the task/feature Requests for new features in Kong label Jan 12, 2016
@Tieske
Member

Tieske commented Jan 12, 2016

Some info on arbitrary TCP connections: https://groups.google.com/forum/#!topic/openresty-en/88E7-1CIowM
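
For context on where such a connection could live: the cosocket API is not usable directly in the init_worker phase, so the usual pattern is to start the long-lived connection from a zero-delay timer. A rough, hypothetical sketch of code run from init_worker_by_lua:

```lua
-- Hypothetical sketch: intended to run in the init_worker_by_lua* phase.
-- Cosockets are not available in that phase, so the connection is opened
-- from a zero-delay timer, which does run in a cosocket-capable context.
local function connect_to_serf(premature)
  if premature then
    return  -- the worker is shutting down (e.g. on an nginx reload)
  end

  local sock = ngx.socket.tcp()
  local ok, err = sock:connect("127.0.0.1", 7373)
  if not ok then
    ngx.log(ngx.ERR, "could not reach the serf agent: ", err)
    return
  end

  -- handshake and event loop would go here
end

local ok, err = ngx.timer.at(0, connect_to_serf)
if not ok then
  ngx.log(ngx.ERR, "failed to create timer: ", err)
end
```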

@Tieske
Member

Tieske commented Feb 5, 2016

Doing this involves setting up an outgoing TCP connection to the Serf agent running on the same host.

The basics work: connecting to the agent and sending/receiving some simple commands. One tricky part is the MessagePack library; it might need some changes to support streaming messages.
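
To illustrate what the streaming requirement means in practice: responses arrive as raw msgpack maps concatenated on the socket with no length prefix, so the decoder has to accept partial input and resume when more bytes arrive. A hypothetical read loop could look like the following; `streaming_unpack` is a placeholder for whatever the patched MessagePack library would expose, not an existing function:

```lua
-- Hypothetical sketch only; `streaming_unpack` is an illustrative placeholder
-- for the streaming decode the MessagePack library would need to provide.
local function read_messages(sock, handle)
  local buffer = ""

  while true do
    -- cosockets only read exact sizes or line patterns, so a plain client
    -- has to read small fixed chunks and buffer them
    local chunk, err = sock:receive(1)
    if not chunk then
      return nil, err
    end
    buffer = buffer .. chunk

    -- placeholder: decode one complete msgpack value from the front of the
    -- buffer, or return nil if the buffer does not yet hold a full value
    local msg, rest = streaming_unpack(buffer)
    if msg then
      buffer = rest
      handle(msg)  -- e.g. a response header { Seq = ..., Error = "" }
    end
  end
end
```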

An issue that is harder to solve (at least properly) is the synchronisation between worker processes. If the client is implemented in the OpenResty worker processes, each worker has its own open TCP connection, so each worker also receives every event. For example, 3 worker processes would all receive an invalidation event and all three would invalidate the in-memory cache, resulting in 3 expensive DB requests instead of one. That is undesirable.

Having only 1 worker set up the TCP connection to the Serf agent seems dangerous to me, as events might get lost if that worker process crashes.

The issue boils down to this: how to collapse the multiple copies of the same incoming event (one per worker) into a single one.
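
One possible way to do that, sketched below under the assumption of a dedicated shared dict (called serf_events here for illustration), is to rely on the atomic add() of ngx.shared dicts: only the first worker to claim an event key gets to act on it. Serf user events do not carry a unique id out of the box, so a real implementation would have to derive one, e.g. from the event name plus its Lamport time.

```lua
-- Sketch only; the dict name and key scheme are illustrative.
-- Requires `lua_shared_dict serf_events 1m;` in the nginx configuration.
local events = ngx.shared.serf_events

local function handle_event_once(event_id, handler)
  -- add() is atomic across all workers and fails if the key already exists,
  -- so only one worker wins; the 60s expiry keeps the dict from growing
  local ok, err = events:add("seen:" .. event_id, true, 60)
  if not ok then
    if err == "exists" then
      return  -- another worker already claimed this event
    end
    ngx.log(ngx.ERR, "could not mark event as seen: ", err)
    return
  end

  handler()  -- e.g. invalidate the in-memory cache and hit the DB once
end
```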

Issues to solve:

  • streaming support when unpacking MessagePack data
  • collapsing multiple copies of the same event into a single one
  • reloading config: when nginx reloads, detect it and shut down the TCP connections so the worker process can actually exit

@subnetmarco
Member Author

Having only 1 worker set up the TCP connection to the Serf agent seems dangerous to me, as events might get lost if that worker process crashes.

I think this can be fixed with locks, the same way we handle reports and cluster keep-alive pings. A possible implementation would look like basic leader election across the workers (a rough sketch follows the list):

  • Every worker listens to Serf, but only one executes actions.
  • Every worker tries to acquire the lock on a specific key in the shared dict, which determines which worker gets to execute the action.
  • Only one worker will be able to get the lock (like we do for reports or cluster keep-alive pings), and it will keep renewing it so that the other workers cannot acquire the same lock.
  • If that worker crashes, the key becomes unlocked because the lock can no longer be renewed.
  • Another worker takes the lock (i.e. becomes the leader) and keeps it. Events are not lost because every worker receives them anyway; the lock only decides which worker executes the resulting actions.
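
A rough sketch of that lock flow (not Kong's actual implementation; the dict name, key and timings below are made up):

```lua
-- Sketch only; dict name, key and timings are illustrative.
-- Requires `lua_shared_dict serf_locks 1m;` in the nginx configuration.
local locks = ngx.shared.serf_locks

local LOCK_KEY    = "serf_event_leader"
local LOCK_TTL    = 10 -- seconds; the lock expires if the holder stops renewing it
local RENEW_EVERY = 5  -- seconds; must be well below LOCK_TTL

local i_am_leader = false

local function renew(premature)
  if premature then
    return -- nginx is shutting down or reloading
  end

  if i_am_leader then
    -- keep pushing the expiry forward while this worker is alive
    locks:set(LOCK_KEY, ngx.worker.pid(), LOCK_TTL)
  else
    -- add() is atomic: it only succeeds while the key is absent, i.e. the
    -- previous leader crashed and its lock expired (or no leader exists yet)
    local ok = locks:add(LOCK_KEY, ngx.worker.pid(), LOCK_TTL)
    if ok then
      i_am_leader = true
    end
  end

  ngx.timer.at(RENEW_EVERY, renew)
end

-- every worker still receives every serf event; only the current leader acts
local function on_serf_event(event)
  if i_am_leader then
    -- invalidate caches, query the database, etc.
  end
end

ngx.timer.at(0, renew)
```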

@sonicaghi sonicaghi added this to the 0.9 milestone May 16, 2016
@Tieske Tieske removed this from the 0.9 milestone Jul 20, 2016
@Tieske
Member

Tieske commented Feb 13, 2017

closing this in favour of #835 (remove dependencies)

@Tieske Tieske closed this as completed Feb 13, 2017