Multiple servers working as a cluster #42
Comments
This would be a fantastic feature for load-balanced environments! Are you planning to implement it soon?
This has been on the roadmap for a long time. If you have any suggestions on this topic, let me know. Regards
I'd say that both of these things should be implemented by the app, not by the module. Messages may arrive out of order and the app should deal with this gracefully. I think the module should just re-publish to the other publisher endpoints specified in its configuration. I wouldn't try to solve all the problems at once. Thanks for your response! Adam
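For illustration only, a rough Python sketch of that re-publishing idea: every message accepted on one node is forwarded to the publish endpoint of the other nodes listed in its configuration. The peer list and the `/pub?id=<channel>` endpoint shape are assumptions for this sketch, not something the module provides today.

```python
# Hypothetical fan-out: forward each locally accepted publish to the peers.
# PEERS and the /pub?id=<channel> URL shape are assumptions for this sketch.
import requests

PEERS = ["http://node2:8080", "http://node3:8080"]  # hypothetical cluster members

def republish(channel: str, payload: bytes) -> None:
    """Forward a locally received message to every peer endpoint.
    Ordering and de-duplication are left to the application, as suggested above."""
    for peer in PEERS:
        try:
            requests.post(f"{peer}/pub", params={"id": channel},
                          data=payload, timeout=2)
        except requests.RequestException:
            pass  # a dead peer is simply skipped in this naive sketch

republish("chat-room-1", b'{"text": "hello"}')
```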
Maybe I'm off topic, but wouldn't it be easier to distribute channels among servers instead of trying to distribute messages?
You don't always have control over which server a user gets connected to; the load balancer decides.
Sure, but it's the same thing with memcached or other distributed services. Distributing channels would deal with the message id trouble; then one has to find a way to distribute data among servers, with a circular hash for example (which would also deal with data redundancy). Is that a bad way?
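In case it helps the discussion, a minimal sketch of the circular (consistent) hash idea in Python; the server names and the number of virtual nodes are made up for illustration:

```python
# Consistent-hashing sketch: map each channel onto one server on a hash ring,
# so all subscribes/publishes for that channel can be routed to the same node.
import bisect
import hashlib

SERVERS = ["node1", "node2", "node3"]  # hypothetical cluster members
VNODES = 100                           # virtual nodes per server to smooth distribution

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

ring = sorted((_hash(f"{server}#{i}"), server)
              for server in SERVERS for i in range(VNODES))
points = [point for point, _ in ring]

def server_for(channel: str) -> str:
    """Walk the ring clockwise from the channel's hash to the next server point."""
    idx = bisect.bisect(points, _hash(channel)) % len(ring)
    return ring[idx][1]

print(server_for("user-1234"))  # this channel always lands on the same node
```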
It isn't a bad way at all, but imagine that you don't have a uniform distribution of subscribers across channels.
Actually, I think our points of view differ because of the use case each of us has in mind. Mine is that many users each subscribe to only one channel (their own), but the subscribe request can arrive at any of many servers. Your use case is really different, but I'm not sure your hypothetical solution would do the trick. If we look at (TCP?) connections, the described solution would lead to:
So with this solution you would have twice as many connections, and I'm not sure you'd lower the load on any server. Though I must admit you would have better memory distribution (even if that then depends on the message sizes). I may be wrong in my analysis too. Regards
Yes, your analysis is wrong :) Regards
So you would be limited to a memory space equal to that of the server with the least shared memory available? That's why I said the use cases differ in our points of view; we probably have different scalable architectures in mind :)
+1 for setting the message id from the application. It makes custom versioning possible and ensures consistency between multiple modules in load-balanced environments. I would also really like to have an upstream mode, which would help build a clustered, load-balanced farm of servers where the backend application does not have to know all server locations (some form of inversion of control). We could use one instance of the module as an upstream server, or make the backend act as one. It would also be nice to have a conditional get based on version: when you already have the current version, you will not get it again, even if you hit another server.

In our system, whenever a new version is published, any frontends subscribed to that event are informed of the change. As you can see, our system is a sort of state replicator as opposed to bare publish-subscribe. Now we are looking for alternatives, since over these years new technologies have arrived, such as websockets (we only have long polling). The Nginx push module would be an ideal replacement for our frontends, but in its current shape it just does not fit... Having custom version support, an upstream mode and (possibly) support for the client specifying its current version would be great :)

Since this issue has been idle for quite a long time, is there any chance of implementing these features? Regards
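To make the "custom version + conditional get" idea concrete, here is a small illustrative sketch (all names are invented) of a server that stores only the newest version per channel and returns the state only when the client's version is behind:

```python
# Sketch of application-assigned versions plus conditional delivery:
# a subscriber that already holds the current version gets nothing back.
latest = {}  # channel -> (version, payload), kept by each server

def publish(channel: str, version: int, payload: str) -> None:
    """Keep only the newest version; older or duplicate publishes are ignored."""
    current = latest.get(channel)
    if current is None or version > current[0]:
        latest[channel] = (version, payload)

def conditional_get(channel: str, client_version: int):
    """Return the state only if the client is behind, like an HTTP conditional GET."""
    current = latest.get(channel)
    if current is not None and current[0] > client_version:
        return current
    return None  # client is up to date (a 304-style response)

publish("prices", 7, "EURUSD=1.08")
print(conditional_get("prices", 6))  # -> (7, 'EURUSD=1.08')
print(conditional_get("prices", 7))  # -> None
```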
How about using a "scalability protocol" library approach like http://zeromq.org/ or https://github.com/nanomsg/nanomsg? Thanks!
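For example, with ZeroMQ each node could publish on its own PUB socket and SUB-connect to its peers. A bare-bones pyzmq sketch of that topology (addresses are made up, and there is no persistence or message-id handling here):

```python
# Minimal ZeroMQ PUB/SUB sketch of a node in a cluster of push servers.
import zmq

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")                  # this node's publish side

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://node2:5556")           # connect to each peer's publish side
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to every channel

# Publish a message for a channel; peers relay it to their local subscribers.
pub.send_multipart([b"chat-room-1", b'{"text": "hello"}'])

# Blocks until a peer publishes something; in a real node this runs in a loop.
channel, payload = sub.recv_multipart()
```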
Hi, is this still on the roadmap?
Yes, but unfortunately I'm currently "out of time".
Okay, I have been looking at http://nats.io/ as a messaging system between websocket servers and have had good results in terms of speed and scaling. There is an official nginx client: https://github.com/nats-io/nginx-nats. I don't know how you have been thinking about building the cluster solution; just wanted to mention it.
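For reference, a minimal nats-py sketch of what using NATS as the bus between push servers could look like; the subject naming and addresses are invented for the example:

```python
# Sketch: each push server subscribes to the NATS bus and relays messages
# to its own local long-polling/websocket subscribers.
import asyncio
import nats

async def main():
    nc = await nats.connect("nats://127.0.0.1:4222")

    async def on_message(msg):
        # Relay the bus message to this node's local subscribers of the channel.
        print(f"deliver to local subscribers of {msg.subject}: {msg.data!r}")

    await nc.subscribe("channels.>", cb=on_message)          # wildcard over all channels
    await nc.publish("channels.chat-room-1", b'{"text": "hello"}')
    await nc.flush()
    await asyncio.sleep(0.1)  # give the callback a moment to run in this demo
    await nc.close()

asyncio.run(main())
```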
Is this option available? Can I set the message id on the application side?
Add the functionality for multiple servers to work together as a cluster, without using external tools like Redis or memcached.