This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

Fix sync tightloop bug. #5507

Merged 3 commits on Jul 1, 2019
1 change: 1 addition & 0 deletions changelog.d/5507.bugfix
@@ -0,0 +1 @@
Fix bug where clients could tight loop calling `/sync` for a period.
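
For context, the failure mode this changelog entry describes looks roughly like the following from a client's point of view. This is an illustrative sketch only, using the standard Matrix `/sync` long poll; the homeserver URL and access token are placeholders, not anything from this PR.

```python
# Hypothetical client loop, for illustration only. Normally the GET below
# blocks for up to `timeout` ms waiting for new events. Before this fix the
# server could answer immediately with next_batch equal to the token it was
# given, so the loop spun round and round without ever blocking.
import requests

HOMESERVER = "https://example.org"  # placeholder
ACCESS_TOKEN = "syt_example_token"  # placeholder

since = None
while True:
    params = {"timeout": 30000, "access_token": ACCESS_TOKEN}
    if since is not None:
        params["since"] = since
    resp = requests.get(
        HOMESERVER + "/_matrix/client/r0/sync", params=params
    ).json()
    since = resp["next_batch"]  # same token returned again -> tight loop
```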
21 changes: 19 additions & 2 deletions synapse/handlers/presence.py
@@ -1017,11 +1017,28 @@ def get_new_events(
         if from_key is not None:
             from_key = int(from_key)

+        max_token = self.store.get_current_presence_token()
+        if from_key == max_token:
+            # This is necessary as due to the way stream ID generators work
+            # we may get updates that have a stream ID greater than the max
+            # token (e.g. max_token is N but stream generator may return
+            # results for N+2, due to N+1 not having finished being
+            # persisted yet).
+            #
+            # This is usually fine, as it just means that we may send down
+            # some presence updates multiple times. However, we need to be
+            # careful that the sync stream either actually does make some
+            # progress or doesn't return, otherwise clients will end up
+            # tight looping calling /sync due to it immediately returning
+            # the same token repeatedly.
+            #
+            # Hence this guard where we just return nothing so that the sync
+            # doesn't return. C.f. #5503.
+            defer.returnValue(([], max_token))
+
         presence = self.get_presence_handler()
         stream_change_cache = self.store.presence_stream_cache

-        max_token = self.store.get_current_presence_token()
-
         users_interested_in = yield self._get_interested_in(user, explicit_room_id)

         user_ids_changed = set()
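
The comment added above hinges on how stream ID generators behave: IDs can be allocated and persisted out of order, so a reader may see an update stamped with an ID greater than the current token. The following is a toy sketch of that behaviour, an assumed simplification for illustration rather than Synapse's actual `StreamIdGenerator`:

```python
# Toy model, for illustration only -- not Synapse's real StreamIdGenerator.
class ToyStreamIdGenerator:
    """Hands out increasing stream IDs; the current token only advances
    once everything up to it has finished persisting."""

    def __init__(self):
        self._last_allocated = 0
        self._unfinished = set()

    def allocate(self):
        self._last_allocated += 1
        self._unfinished.add(self._last_allocated)
        return self._last_allocated

    def finish(self, stream_id):
        self._unfinished.discard(stream_id)

    def get_current_token(self):
        # Highest ID with no earlier allocation still in flight.
        if self._unfinished:
            return min(self._unfinished) - 1
        return self._last_allocated


gen = ToyStreamIdGenerator()
first = gen.allocate()   # "N+1"
second = gen.allocate()  # "N+2"
gen.finish(second)       # N+2's write completes before N+1's

# The current token is still "N" (0 here), yet a reader could already see
# the update stamped N+2. Without the guard above, /sync from token N could
# return that update alongside an unchanged token, and the client would loop.
assert gen.get_current_token() == 0
```

With the guard in place, a request whose `from_key` already equals the current presence token simply reports no presence changes rather than echoing updates back with the same token, so the sync long poll keeps blocking and clients stop spinning.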