Add support for getting clients in a given room (Works across multiple nodes) #15
Conversation
…s to key names, attempt Travis build
…will not work in a multi-node environment.
Fix: http://stackoverflow.com/questions/23858604/how-to-get-rooms-clients-list-in-socket-io-1-0
Request: socketio/socket.io#1428
Pull requests:
- socketio/socket.io#1630
- socketio/socket.io-adapter#5
- socketio/socket.io-redis-adapter#15
The client id was missing.
@FREEZX, @guille I corrected a few small bugs in FREEZX/pull/1, FREEZX/pull/2 and FREEZX/pull/3. For the sync of clients and rooms data, I am considering the following:
What do you think?
Add client.id argument to the call of Adapter.del
Corrected deletion key on disconnect
This is definitely headed in the right direction. Happy to proceed with adding this API. One thing I'd like, however, is to first start by documenting the …
I think we also need to document the schema we're keeping in Redis on this repository's README, especially in light of the added data complexity this patch will introduce, and the possibility for improper / incomplete cleanup in the event of a crash.
+1 to completing the README so users can build their own cleanup implementation. And even though I know this merge has been awaited for a long time, I think adding the node uid (as a record) to each socket SET in Redis could be helpful (at least for cleanup implementations). In future releases, adding a node SET on startup (with expire and refresh) would let a node easily determine the state of other nodes (and of sockets from other nodes).
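To make that suggestion concrete, here is a rough sketch of node-level bookkeeping along those lines. The key names (`nodes`, `node:<uid>`, `socket:<id>:node`, `room:<name>`) are invented for illustration and are not part of this PR:

```js
// Rough sketch of the suggested node-liveness bookkeeping.
// Key names ('nodes', 'node:<uid>', 'socket:<id>:node') are hypothetical.
var crypto = require('crypto');
var redis = require('redis').createClient();
var uid = crypto.randomBytes(8).toString('hex'); // this node's id

function heartbeat() {
  // Register this node with a TTL and refresh it periodically, so entries
  // for crashed nodes expire on their own.
  redis.setex('node:' + uid, 30, String(Date.now()));
  redis.sadd('nodes', uid);
}
heartbeat();
setInterval(heartbeat, 10000);

// When a socket is recorded in Redis, also record which node owns it, so a
// cleanup job can drop sockets whose owning node key has expired.
function trackSocket(socketId, room) {
  redis.sadd('room:' + room, socketId);           // per-room set of ids
  redis.set('socket:' + socketId + ':node', uid); // owner-node record
}
```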
README documentation
@dbrugne approved.
@FREEZX, same for the socket.io clients method: FREEZX/socket.io#1
@dbrugne Yep, cool.
I haven't managed to check the source code for this pull request yet, but I'm wondering whether a "pure Redis" implementation wouldn't be the way to go, as in: each worker saving its own state in Redis?
The only issue I can see is if a worker doesn't shut down gracefully; other than that, this approach should solve the problem?
@FREEZX I plan to use this pull request until it's patched in the official API. However, I'm wondering how to actually use your clients method. Currently I'm doing this:

```js
// set up the adapter
var adapter = require('socket.io-redis')({ host: 'localhost', port: 6379 });
io.adapter(adapter);

// Get all clients in a room
adapter.prototype.clients(room, function(err, data){
  console.log(data); // ['K0ce8P0WP_4vM2evAAAB', ...] This works!
});
```

This seems to work, but is accessing `prototype` directly the right way to do it?
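For reference, a less awkward way to reach the adapter is through the namespace instance rather than the constructor's prototype. A minimal sketch, assuming socket.io 1.x (where `io.of('/').adapter` is the adapter instance) and the `clients` method this PR adds:

```js
// Call the PR's clients() method on the adapter instance attached to the
// default namespace, instead of going through the constructor's prototype.
io.of('/').adapter.clients(room, function (err, ids) {
  if (err) throw err;
  console.log(ids); // socket ids currently in `room`, across all nodes
});
```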
@artofspeed Are you using my fork of socket.io?
@FREEZX With your fork, can we know when a user has entered or left a room? I had to implement this in my application and ended up doing a couple of workarounds with socket.id, which I don't even know is unique across multiple running instances.
I do not need this anymore; I've switched to Primus with the ws transformer, so I won't be working on this year-old PR. If anyone wants to take over, be my guest.
@FREEZX Can primus-cluster and primus-rooms find out the number of clients inside a room out of the box?
@FREEZX Interesting. What plugin do you use with Primus to replace this Redis adapter and broadcast messages to the running Node instances?
I'm also interested in this :)
@perrin4869 Only if you use https://github.com/cayasso/primus-rooms#primusroomroomclientsfn and then count the number of results. It should be fairly straightforward to add a count feature.
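For illustration, counting a room's clients with the primus-rooms call linked above could look roughly like this (assuming `primus.room(name).clients(fn)` yields the spark ids in that room):

```js
// Fetch the ids in a room and count them; a dedicated count feature would
// avoid transferring the full id list.
primus.room('news').clients(function (err, ids) {
  if (err) return console.error(err);
  console.log('clients in "news":', ids.length);
});
```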
+1, any news on this?
@rauchg Any chance this is going to be merged at all? Lots of people are interested in this feature.
Hello @FREEZX! Looks like I also decided to go with Primus, but I'm not sure which transformer is better to use. Is …
Just found a nice article on the topic of choosing transformers (with performance tests): https://medium.com/@denizozger/finding-the-right-node-js-websocket-implementation-b63bfca0539
@jsdream I made a detailed benchmark covering all the Node websocket technologies I found while developing my own web framework, in order to choose the best possible approach:
@FREEZX Thank you very much for the response. Btw, your framework looks very interesting. Will consider it for future projects ;-)
@jsdream Thanks, hope you enjoy using it. :) I was previously working with mean.js, but it had some bugs, and it was difficult to make it suit my needs, work with websockets, and scale, so instead I went out and built this.
@FREEZX Yeah, currently I'm struggling with MEAN.js to get websockets working in cluster mode. In the benchmark you provided, I noticed this: "Nuke.js uses the websockets transformer by default, because it is the fastest transformer available, and does not require sticky sessions support on your load balancer." Does this really mean there is no handshake problem when using the ws module with Primus and Node.js cluster? Sorry for these kinds of questions, I'm just really new to websockets.
@jsdream Yes, since it doesn't require sticky sessions, and judging from the manual, that's the only issue that breaks Primus with cluster. I also remember asking the same question on the Primus IRC; I think I got the same response. Server: irc.freenode.net
Cool. Thank you again!
There's been a lot of discussion going on here. The inability to support this is kind of making me look towards other libraries...
@rauchg Two years... Will it be merged?
+1 What's the status on this? @rauchg
This PR is outdated, and has been broken by newer updates to both socket.io and the adapter. As the PR author has abandoned it (which I can't blame them for), it's probably not going to get merged any time soon, if ever.
I have had an idea: what if, for the lifetime of the connected socket, we periodically set a key in Redis with a TTL? If the socket disconnects, or if the server crashes, Redis would remove the keys using its key-expiration mechanism.
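A rough sketch of that heartbeat idea; the key name and the timings are invented for illustration and are not part of this PR:

```js
// Keep a per-socket liveness key in Redis with a TTL, refreshed while the
// socket is connected. If the process crashes, the key simply expires and
// the socket drops out of Redis without any explicit cleanup.
var redis = require('redis').createClient();

io.on('connection', function (socket) {
  var key = 'socket:' + socket.id + ':alive'; // hypothetical key name
  function refresh() {
    redis.setex(key, 60, '1'); // expires after 60s unless refreshed
  }
  refresh();
  var timer = setInterval(refresh, 30000);

  socket.on('disconnect', function () {
    clearInterval(timer);
    redis.del(key); // immediate cleanup on a clean disconnect
  });
});
```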
Closed by #109
I added basic support to the Redis adapter for storing and getting the clients of a room (as an async call) in the Redis database.
I didn't manage to do the exit cleanup correctly, so after a normal exit, rooms are still filled with the same (now nonexistent) socket ids. Quitting with Ctrl+C or the kill signal should work as expected.
Also added a test for the clients function call.
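Since the cleanup on exit is the fragile part mentioned above, here is a minimal sketch of signal-based cleanup under assumed key names (`room:<name>` sets holding socket ids); this is an illustration, not the PR's actual implementation:

```js
// Remove this node's socket ids from the per-room sets in Redis before
// exiting, so stale ids are not left behind (key layout is an assumption).
var redis = require('redis').createClient();

function cleanupAndExit() {
  var multi = redis.multi();
  Object.keys(io.sockets.connected).forEach(function (id) {
    var socket = io.sockets.connected[id];
    (socket.rooms || []).forEach(function (room) {
      multi.srem('room:' + room, id); // hypothetical per-room set
    });
  });
  multi.exec(function () {
    process.exit(0);
  });
}

process.on('SIGINT', cleanupAndExit);
process.on('SIGTERM', cleanupAndExit);
```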