This repository has been archived by the owner on Dec 18, 2018. It is now read-only.

Finalize Redis scaleout design and implementation #288

Closed
muratg opened this issue Mar 10, 2017 · 10 comments

Comments

@muratg

muratg commented Mar 10, 2017

No description provided.

@muratg muratg added this to the 1.0.0-preview1 milestone Mar 10, 2017
@muratg muratg modified the milestones: 1.0.0, 1.0.0-preview1 Apr 14, 2017
@muratg muratg modified the milestones: 2.1.0-preview1, 2.1.0 May 26, 2017
@muratg muratg mentioned this issue Jun 12, 2017
@muratg muratg self-assigned this Jun 16, 2017
@muratg
Author

muratg commented Jun 16, 2017

@davidfowl @mikaelm12 @moozzyk @BrennanConroy we'll need a design meeting.

@muratg muratg assigned davidfowl and unassigned muratg Jun 23, 2017
@davidfowl
Member

The current Redis design is optimized to have a single connection per server and a minimal number of subscriptions. This means that we have:

  • A subscription per connection
  • A subscription per user id
  • A subscription per hub
  • A subscription per group
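The four subscription kinds above can be sketched as a channel-naming scheme. This is a minimal illustration in Python; the channel name format and the `channels_for` helper are assumptions for this sketch, not the actual SignalR Redis channel layout.

```python
# Hypothetical channel-naming sketch for the per-entity subscriptions listed
# above. Channel formats are invented for illustration.
def channels_for(hub, connection_id, user_id, groups):
    """Return the pub/sub channels a server would subscribe to on behalf
    of one client connection."""
    chans = {
        f"{hub}.connection.{connection_id}",  # a subscription per connection
        f"{hub}.user.{user_id}",              # a subscription per user id
        f"{hub}.all",                         # a subscription per hub (broadcast)
    }
    chans |= {f"{hub}.group.{g}" for g in groups}  # a subscription per group
    return chans

print(sorted(channels_for("chat", "c1", "u42", ["admins"])))
```

With a single Redis connection per server, all of these subscriptions multiplex over that one connection.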

A few things that are implemented but need to be solidified:

  • Today we use JSON serialization (with type name handling) as the internal format to pass data between servers. Should we pick something more efficient?
  • We don't support reconnecting to Redis. If the connection dies while data is being sent, should we queue it or fail?
  • We need to support adding connections to groups for connections that don't exist on the local server.
  • Today we always publish through Redis even if the connection is on the local machine; we don't need to do this anymore since we require stickiness.
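The last bullet, the local-delivery shortcut that stickiness enables, can be sketched as follows. This is an assumed-logic illustration, not the actual library code: a send targeted at a single connection only needs the backplane when that connection lives on another server.

```python
# Minimal sketch (invented names) of skipping the Redis publish when the
# target connection is hosted on this server.
published = []  # stand-in for messages published to the Redis backplane

local_connections = {"c1": "<transport for c1>"}

def send_to_connection(connection_id, message):
    transport = local_connections.get(connection_id)
    if transport is not None:
        return ("local", message)          # deliver directly, skip Redis
    published.append((connection_id, message))
    return ("redis", message)              # publish for the owning server

print(send_to_connection("c1", "hi"))
print(send_to_connection("c9", "hi"))
```

Without stickiness a connection could migrate between servers mid-session, so every send would have to go through the backplane; requiring sticky sessions is what makes the local shortcut safe.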

@abergs

abergs commented Aug 2, 2017

> since we require stickiness.

Do you mean that the client connection is required to be sticky (e.g. when load balancing across several front-end servers)? @davidfowl

@abergs

abergs commented Aug 2, 2017

Just watched the NDC London video; I now understand what you mean by stickiness. @davidfowl

@muratg muratg modified the milestones: 2.1.0-preview1, 1.0.0-alpha1 Aug 18, 2017
@muratg
Author

muratg commented Aug 18, 2017

We plan to do #584 in alpha.

@ThatRendle

I was all set to go "WHAT? STICKINESS?" but then I checked whether you can do that running in a Docker Swarm with the Traefik LB/proxy, and it turns out you absolutely can, so OK, as you were.

Carry on

@moozzyk
Contributor

moozzyk commented Sep 15, 2017

@markrendle - try long polling.

@BrennanConroy
Member

> A few things that are implemented but need to be solidified:
>
>   • Today we use JSON serialization (with type name handling) as the internal format to pass data between servers. Should we pick something more efficient?
>   • We don't support reconnecting to redis. Do we queue if the connection dies and data is sent or should we fail?
>   • We need to support adding connections to groups for connection that don't exist on the local server.
>   • Today we always publish through redis even if the connection is on the local machine, we don't need to do this any more since we require stickiness.

Status update:

  • Reconnecting is enabled by default now. We don't do anything special, so if you lose the connection and try to publish, the invocation will throw; once you reconnect, publishes work again.
  • Adding connections to groups for connections that don't exist on the local server is supported, and it waits for an ACK before completing.
  • We still publish through Redis in all scenarios except when invoking on a specific connection that is on the local server, and when adding or removing a local connection from a group; we handle those scenarios locally.
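The "add to group with ACK" behavior described above can be sketched with an in-memory queue standing in for the Redis pub/sub channel. All names here are assumptions for illustration; the point is that the calling server blocks until the server owning the connection acknowledges the group change.

```python
# Sketch of a cross-server group-add that waits for an ACK before completing.
import queue

ack_channel = queue.Queue()  # stand-in for a Redis pub/sub ACK channel

def remote_server_handle_group_add(connection_id, group, members):
    # Runs on the server that owns the connection: apply the change, then ACK.
    members.setdefault(group, set()).add(connection_id)
    ack_channel.put(("ack", connection_id, group))

def add_to_group(connection_id, group, members, timeout=1.0):
    # The calling server publishes the command, then blocks until the
    # owning server acknowledges it (or the wait times out).
    remote_server_handle_group_add(connection_id, group, members)
    kind, cid, grp = ack_channel.get(timeout=timeout)
    return kind == "ack" and cid == connection_id and grp == group

members = {}
print(add_to_group("c7", "admins", members))
```

Waiting for the ACK is what lets `AddToGroupAsync`-style calls complete only after the group membership change has actually taken effect on the owning server.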

@davidfowl
Member

davidfowl commented Nov 3, 2017

> Today we use JSON serialization (with type name handling) as the internal format to pass data between servers. Should we pick something more efficient?

I think this is the last thing we need to look over before closing this issue. Everything else looks good. We should do some initial performance work to see what certain application scenarios look like with Redis (many groups, many connections, etc.).
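To make the serialization concern concrete, here is a rough size comparison between a JSON payload carrying type metadata (the `$type` property is the shape Json.NET's TypeNameHandling emits; the type string and field names below are invented, not the actual wire format) and a compact positional framing of the same data.

```python
# Rough illustration of the overhead of JSON with type name handling for
# server-to-server messages. Field names and the type string are invented.
import json

typed = json.dumps({
    "$type": "Some.Assembly.Qualified.RedisInvocationMessage",  # hypothetical
    "Target": "Send",
    "Arguments": ["hello"],
})
compact = json.dumps(["Send", ["hello"]])  # positional framing, no type names

print(len(typed), len(compact))
assert len(compact) < len(typed)
```

The repeated type names and property names are paid on every message between servers, which is why a binary or positional format is worth measuring under the "many groups, many connections" scenarios mentioned above.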

@muratg
Author

muratg commented Nov 30, 2017

Closing per triage. This is done.


6 participants