This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

Faster joins: support worker-mode deployments #12994

Closed · Tracked by #14030
richvdh opened this issue Jun 9, 2022 · 5 comments
Labels: A-Federated-Join (joins over federation generally suck), T-Enhancement (new features, changes in functionality, improvements in performance, or user-facing enhancements)

Comments

richvdh (Member) commented on Jun 9, 2022:

Including:

    • Make the partial-state coordination work correctly across workers.
richvdh added the A-Federated-Join and T-Enhancement labels on Jun 9, 2022
richvdh (Member, Author) commented on Jun 9, 2022:

Particular points of note:

  • # TODO(faster_joins): do we need to lock to avoid races? What happens if other
    # worker processes kick off a resync in parallel? Perhaps we should just elect
    # a single worker to do the resync.
    # https://github.com/matrix-org/synapse/issues/12994
  • # TODO(faster_joins): support workers
    # https://github.com/matrix-org/synapse/issues/12994
    assert (
        self._storage_controllers.persistence is not None
    ), "worker-mode deployments not currently supported here"

    (fixed in "Faster room joins: fix race in recalculation of current room state", #13151)
  • async def update_current_state(self, room_id: str) -> None:
        """Recalculate the current state for a room, and persist it"""
        state = await self._calculate_current_state(room_id)
        delta = await self._calculate_state_delta(room_id, state)

        # TODO(faster_joins): get a real stream ordering, to make this work correctly
        # across workers.
        # https://github.com/matrix-org/synapse/issues/12994
        #
        # TODO(faster_joins): this can race against event persistence, in which case we
        # will end up with incorrect state. Perhaps we should make this a job we
        # farm out to the event persister thread, somehow.
        # https://github.com/matrix-org/synapse/issues/13007
        stream_id = self.main_store.get_room_max_stream_ordering()
        await self.persist_events_store.update_current_state(room_id, delta, stream_id)

    (fixed in "Faster room joins: fix race in recalculation of current room state", #13151)
  • # TODO(faster_joins): need to do something about workers here
    # https://github.com/matrix-org/synapse/issues/12994
    txn.call_after(self.is_partial_state_event.invalidate, (event.event_id,))

    (see the cross-worker invalidation sketch below)
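
For the last point, Synapse already has a mechanism for propagating cache invalidations between workers: the cache-invalidation replication stream, written to via _invalidate_cache_and_stream. A minimal sketch of what the fix might look like, assuming that helper keeps its usual shape (the surrounding method is abbreviated and partly hypothetical):

    def _update_state_for_partial_state_event_txn(self, txn, event, context):
        # ... update the partial-state tables for this event ...

        # The local-only invalidation quoted above only fixes the cache on
        # the worker that runs this transaction:
        #     txn.call_after(self.is_partial_state_event.invalidate, (event.event_id,))
        #
        # Writing to the cache-invalidation stream instead tells every worker
        # to drop its cached entry once the transaction commits:
        self._invalidate_cache_and_stream(
            txn, self.is_partial_state_event, (event.event_id,)
        )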

squahtx (Contributor) commented on Sep 28, 2022:

richvdh (Member, Author) commented on Oct 10, 2022:

There are a couple of things to resolve here:

  • How do we decide which worker should do the resync operation, in the face of potential concurrent joins, and process restarts?

    This is actually easy, as things currently stand: /send_join requests are always sent by the master process (via ReplicationRemoteJoinRestServlet). Hence the resync process will also take place there, and we should make sure only to call _resume_sync_partial_state_room on the master process (as, it turns out, we already do today).

  • Once we un-partial-state an event, we need to tell other workers about it, so that they can:

    • flush their cache for is_partial_state_event
    • flush their cache for _get_state_group_for_event
    • if the event changed rejection status, do _invalidate_local_get_event_cache
    • unblock any requests which are waiting for full state at that event

    I think this probably "just" wants to be a new replication stream listing the event ids, and the old and new rejection statuses, of any un-partial-stated events (see the sketch after this list). This can then be used to flush the caches and to call _state_storage_controller.notify_event_un_partial_stated.

    I think most of the infrastructure for the stream is already present (in synapse.replication.tcp.streams). We'll need to define a new stream type, and add a new stream table to contain the data.

    (It would be nice to do this in a way that doesn't involve a stream table at all (see "getting rid of the replication 'stream' tables", #13456), but that would probably add an unacceptable impediment to this project.)

  • Once we un-partial-state an entire room, we need to tell other workers about it, so that they can unblock any requests waiting for it.

    This ties into "Faster joins: non-lazy_load_members /syncs should not block" (#12989), in that both problems need a way of associating a stream ordering with a room's un-partial-stating. In fact, I think we should fix that one first, because a solution for #12989 is likely to Just Work here, whereas the reverse is not true.
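
A rough sketch of the row type such a stream might carry. All names below are hypothetical illustrations of the proposal, not Synapse's actual API:

    import attr

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class UnPartialStatedEventStreamRow:
        """One un-partial-stated event, as broadcast to other workers."""

        # The event whose partial state has just been resolved.
        event_id: str

        # Rejection status before and after un-partial-stating. If the two
        # differ, workers must also drop the event from their local event
        # caches (_invalidate_local_get_event_cache).
        was_rejected: bool
        is_rejected: bool

On receiving such a row, a worker would invalidate is_partial_state_event and _get_state_group_for_event for that event, fix up the event cache if the rejection status changed, and call _state_storage_controller.notify_event_un_partial_stated to unblock anything waiting for full state at that event.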


Aside: in future we should consider moving the /send_join request out of the master process. The obvious candidate is the "client reader" that receives the client-side /join request (and hence currently makes the request to ReplicationRemoteJoinRestServlet). The main thing to worry about then is locking (to ensure that we don't have multiple workers all trying to do the remote-join dance at once). For prior art in that department, we should look at the code that handles incoming events received over federation (https://github.com/matrix-org/synapse/blob/v1.69.0rc2/synapse/federation/federation_server.py#L1108-L1116), which uses a database row to hold a lock: we can simply call try_acquire_lock before starting a resync operation.
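
A minimal sketch of what that could look like, approximating the lock-store API used in the federation code linked above (the lock name and the handler method are hypothetical):

    # Hypothetical lock name; one lock per room being resynced.
    _RESYNC_PARTIAL_STATE_LOCK_NAME = "resync_partial_state"

    async def _maybe_start_partial_state_resync(self, room_id: str) -> None:
        # try_acquire_lock returns None if another worker already holds the
        # lock, in which case a resync is already in progress elsewhere.
        lock = await self._store.try_acquire_lock(
            _RESYNC_PARTIAL_STATE_LOCK_NAME, room_id
        )
        if lock is None:
            return

        # The lock is renewed while held and released on exit; if this worker
        # dies mid-resync, the lock expires and another worker can take over.
        async with lock:
            await self._sync_partial_state_room(room_id)  # simplified signature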

That still leaves us with the problem of making sure we resume the partial-state resync if the client reader that is currently processing it gets restarted (or, worse, turned off, never to return). Again following the example of incoming events: in that case, we kick off a processing job as soon as a worker discovers itself to be a "federation inbound" worker by receiving a /send request. Probably we could do the same here on a /_matrix/client/v3/rooms/.*/(send|join|invite|leave|ban|unban|kick) request?
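
Following that pattern, resuming interrupted resyncs might look something like the sketch below. The query method and the once-per-lifetime flag are hypothetical, and _maybe_start_partial_state_resync is the lock-guarded entry point sketched above:

    async def _on_first_membership_request(self) -> None:
        # Called on the first /join, /leave, etc. request this worker
        # handles, by analogy with the inbound-federation worker resuming
        # its processing on the first /send request.
        if self._resumed_partial_state_resyncs:
            return
        self._resumed_partial_state_resyncs = True

        # Hypothetical query listing rooms whose resync was interrupted by a
        # restart (i.e. rooms still marked as having partial state).
        for room_id in await self._store.get_partial_state_room_ids():
            # The per-room lock ensures that workers racing here won't start
            # duplicate resyncs.
            await self._maybe_start_partial_state_resync(room_id)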

reivilibre (Contributor) commented on Nov 11, 2022:

The bulk of this issue comes down to communicating un-partial-stating events and rooms between workers.

In #12989, where we also need to communicate the un-partial-stating of a room, the general consensus seems to be that we should have a new stream with its own numbering (not by complexifying the events stream) and include that in sync tokens.

The comment above talks of a stream for un-partial-stated events. The two could probably be combined, but I think that would be messier than just having two of them, frankly, so I propose we have two separate streams here.

New issues for the streams: #14418 (events), #14419 (rooms).
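
A hypothetical companion row type for the per-room stream (#14419), by analogy with the per-event sketch earlier in this thread; the name is illustrative only:

    import attr

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class UnPartialStatedRoomStreamRow:
        # The room whose state is now complete. A worker receiving this row
        # can unblock any requests that were waiting for full state in the
        # room, and the row's position in the stream gives the ordering
        # needed for sync tokens (#12989).
        room_id: str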

#14544 has been opened for the enhancement of allowing non-master workers to perform the partial room re-sync process.

DMRobertson (Contributor) commented:

We consider this done as of #14752.
