
Merge branch 'rei/rss_target' into rei/rss_inc2
reivilibre committed Aug 27, 2019
2 parents 1ecd1a6 + baeaf00 commit 5043ef8
Showing 61 changed files with 1,284 additions and 503 deletions.
1 change: 1 addition & 0 deletions changelog.d/5771.feature
@@ -0,0 +1 @@
Make Opentracing work in worker mode.
1 change: 1 addition & 0 deletions changelog.d/5776.misc
@@ -0,0 +1 @@
Update opentracing docs to use the unified `trace` method.
1 change: 1 addition & 0 deletions changelog.d/5845.feature
@@ -0,0 +1 @@
Add an admin API to purge old rooms from the database.
1 change: 1 addition & 0 deletions changelog.d/5850.feature
@@ -0,0 +1 @@
Add retry to well-known lookups if we have recently seen a valid well-known record for the server.
1 change: 1 addition & 0 deletions changelog.d/5852.feature
@@ -0,0 +1 @@
Pass opentracing contexts between servers when transmitting EDUs.
1 change: 1 addition & 0 deletions changelog.d/5855.misc
@@ -0,0 +1 @@
Opentracing for room and e2e keys.
1 change: 1 addition & 0 deletions changelog.d/5860.misc
@@ -0,0 +1 @@
Remove log line for debugging issue #5407.
1 change: 1 addition & 0 deletions changelog.d/5877.removal
@@ -0,0 +1 @@
Remove shared secret registration from client/r0/register endpoint. Contributed by Awesome Technologies Innovationslabor GmbH.
1 change: 1 addition & 0 deletions changelog.d/5878.feature
@@ -0,0 +1 @@
Add admin API endpoint for setting whether or not a user is a server administrator.
1 change: 1 addition & 0 deletions changelog.d/5885.bugfix
@@ -0,0 +1 @@
Fix stack overflow when recovering an appservice which had an outage.
1 change: 1 addition & 0 deletions changelog.d/5886.misc
@@ -0,0 +1 @@
Refactor the Appservice scheduler code.
1 change: 1 addition & 0 deletions changelog.d/5893.misc
@@ -0,0 +1 @@
Drop some unused tables.
1 change: 1 addition & 0 deletions changelog.d/5894.misc
@@ -0,0 +1 @@
Add missing index on users_in_public_rooms to improve the performance of directory queries.
1 change: 1 addition & 0 deletions changelog.d/5895.feature
@@ -0,0 +1 @@
Add config option to sign remote key query responses with a separate key.
1 change: 1 addition & 0 deletions changelog.d/5896.misc
@@ -0,0 +1 @@
Improve the logging when we have an error when fetching signing keys.
1 change: 1 addition & 0 deletions changelog.d/5906.feature
@@ -0,0 +1 @@
Increase max display name size to 256.
1 change: 1 addition & 0 deletions changelog.d/5909.misc
@@ -0,0 +1 @@
Fix error message which referred to public_base_url instead of public_baseurl. Thanks to @aaronraimist for the fix!
1 change: 1 addition & 0 deletions changelog.d/5911.misc
@@ -0,0 +1 @@
Add support for database engine-specific schema deltas, based on file extension.
18 changes: 18 additions & 0 deletions docs/admin_api/purge_room.md
@@ -0,0 +1,18 @@
Purge room API
==============

This API will remove all trace of a room from your database.

All local users must have left the room before it can be removed.

The API is:

```
POST /_synapse/admin/v1/purge_room
{
    "room_id": "!room:id"
}
```

You must authenticate using the access token of an admin user.
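
As an illustration of calling this endpoint (not part of the diff above), here is a minimal sketch using Python's `requests` library; the homeserver URL, access token and room ID are placeholders.

```python
# Minimal sketch of calling the purge room API; all values below are placeholders.
import requests

HOMESERVER = "https://localhost:8448"   # assumed homeserver base URL
ADMIN_TOKEN = "<admin_access_token>"    # access token of a server admin

resp = requests.post(
    f"{HOMESERVER}/_synapse/admin/v1/purge_room",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={"room_id": "!room:id"},
)
resp.raise_for_status()
```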
20 changes: 20 additions & 0 deletions docs/admin_api/user_admin_api.rst
@@ -84,3 +84,23 @@ with a body of:
}
including an ``access_token`` of a server admin.


Change whether a user is a server administrator or not
======================================================

Note that you cannot demote yourself.

The API is::

    PUT /_synapse/admin/v1/users/<user_id>/admin

with a body of:

.. code:: json

    {
        "admin": true
    }

including an ``access_token`` of a server admin.
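
For illustration only (not part of the diff), here is a comparable sketch of calling the new admin endpoint with Python's `requests`; the homeserver URL, token and user ID are placeholders.

```python
# Minimal sketch of granting server-admin status; all values below are placeholders.
import requests

HOMESERVER = "https://localhost:8448"
ADMIN_TOKEN = "<admin_access_token>"
USER_ID = "@alice:example.com"

resp = requests.put(
    f"{HOMESERVER}/_synapse/admin/v1/users/{USER_ID}/admin",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={"admin": True},  # set to False to remove admin rights
)
resp.raise_for_status()
```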
27 changes: 25 additions & 2 deletions docs/opentracing.rst
@@ -32,7 +32,7 @@ It is up to the remote server to decide what it does with the spans
it creates. This is called the sampling policy and it can be configured
through Jaeger's settings.

For OpenTracing concepts see
https://opentracing.io/docs/overview/what-is-tracing/.

For more information about Jaeger's implementation see
@@ -79,7 +79,7 @@ Homeserver whitelisting

The homeserver whitelist is configured using regular expressions. A list of regular
expressions can be given and their union will be compared when propagating any
span contexts to another homeserver.

Though it's mostly safe to send and receive span contexts to and from
untrusted users, since span contexts are usually opaque IDs, it can lead to
@@ -92,6 +92,29 @@ two problems, namely:
but that doesn't prevent another server sending you baggage which will be logged
to OpenTracing's logs.

==========
EDU FORMAT
==========

EDUs can contain tracing data in their content. This is not specced, but
it could be of interest for other homeservers.

EDU format (if you're using Jaeger):

.. code-block:: json

    {
        "edu_type": "type",
        "content": {
            "org.matrix.opentracing_context": {
                "uber-trace-id": "fe57cf3e65083289"
            }
        }
    }

Though you don't have to use Jaeger, you must inject the span context into
`org.matrix.opentracing_context` using the opentracing `Format.TEXT_MAP` inject method.
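
As a rough sketch of that injection step (this is not Synapse's actual helper; the function name and EDU dict below are illustrative), using the Python `opentracing` package:

```python
# Illustrative sketch only: injects a span context into an EDU's content dict
# under org.matrix.opentracing_context using the TEXT_MAP format.
import opentracing
from opentracing.propagation import Format

def attach_tracing_context(edu_content, span):
    # Serialise the span context into a plain dict carrier.
    carrier = {}
    opentracing.tracer.inject(
        span_context=span.context, format=Format.TEXT_MAP, carrier=carrier
    )
    # With the Jaeger tracer this yields e.g. {"uber-trace-id": "..."}.
    edu_content["org.matrix.opentracing_context"] = carrier
    return edu_content
```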

==================
Configuring Jaeger
==================
8 changes: 8 additions & 0 deletions docs/sample_config.yaml
@@ -1027,6 +1027,14 @@ signing_key_path: "CONFDIR/SERVERNAME.signing.key"
#
#trusted_key_servers:
# - server_name: "matrix.org"
#

# The signing keys to use when acting as a trusted key server. If not specified,
# defaults to the server signing key.
#
# Can contain multiple keys, one per line.
#
#key_server_signing_keys_path: "key_server_signing_keys.key"


# Enable SAML2 for registration and login. Uses pysaml2.
153 changes: 90 additions & 63 deletions synapse/appservice/scheduler.py
@@ -70,35 +70,37 @@ def __init__(self, hs):
self.store = hs.get_datastore()
self.as_api = hs.get_application_service_api()

def create_recoverer(service, callback):
return _Recoverer(self.clock, self.store, self.as_api, service, callback)

self.txn_ctrl = _TransactionController(
self.clock, self.store, self.as_api, create_recoverer
)
self.txn_ctrl = _TransactionController(self.clock, self.store, self.as_api)
self.queuer = _ServiceQueuer(self.txn_ctrl, self.clock)

@defer.inlineCallbacks
def start(self):
logger.info("Starting appservice scheduler")

# check for any DOWN ASes and start recoverers for them.
recoverers = yield _Recoverer.start(
self.clock, self.store, self.as_api, self.txn_ctrl.on_recovered
services = yield self.store.get_appservices_by_state(
ApplicationServiceState.DOWN
)
self.txn_ctrl.add_recoverers(recoverers)

for service in services:
self.txn_ctrl.start_recoverer(service)

def submit_event_for_as(self, service, event):
self.queuer.enqueue(service, event)


class _ServiceQueuer(object):
"""Queues events for the same application service together, sending
transactions as soon as possible. Once a transaction is sent successfully,
this schedules any other events in the queue to run.
"""Queue of events waiting to be sent to appservices.
Groups events into transactions per-appservice, and sends them on to the
TransactionController. Makes sure that we only have one transaction in flight per
appservice at a given time.
"""

def __init__(self, txn_ctrl, clock):
self.queued_events = {} # dict of {service_id: [events]}

# the appservices which currently have a transaction in flight
self.requests_in_flight = set()
self.txn_ctrl = txn_ctrl
self.clock = clock
@@ -136,13 +138,29 @@ def _send_request(self, service):


class _TransactionController(object):
def __init__(self, clock, store, as_api, recoverer_fn):
"""Transaction manager.
Builds AppServiceTransactions and runs their lifecycle. Also starts a Recoverer
if a transaction fails.
(Note that we only have one of these in the homeserver.)
Args:
clock (synapse.util.Clock):
store (synapse.storage.DataStore):
as_api (synapse.appservice.api.ApplicationServiceApi):
"""

def __init__(self, clock, store, as_api):
self.clock = clock
self.store = store
self.as_api = as_api
self.recoverer_fn = recoverer_fn
# keep track of how many recoverers there are
self.recoverers = []

# map from service id to recoverer instance
self.recoverers = {}

# for UTs
self.RECOVERER_CLASS = _Recoverer

@defer.inlineCallbacks
def send(self, service, events):
@@ -154,61 +172,63 @@ def send(self, service, events):
if sent:
yield txn.complete(self.store)
else:
run_in_background(self._start_recoverer, service)
run_in_background(self._on_txn_fail, service)
except Exception:
logger.exception("Error creating appservice transaction")
run_in_background(self._start_recoverer, service)
run_in_background(self._on_txn_fail, service)

@defer.inlineCallbacks
def on_recovered(self, recoverer):
self.recoverers.remove(recoverer)
logger.info(
"Successfully recovered application service AS ID %s", recoverer.service.id
)
self.recoverers.pop(recoverer.service.id)
logger.info("Remaining active recoverers: %s", len(self.recoverers))
yield self.store.set_appservice_state(
recoverer.service, ApplicationServiceState.UP
)

def add_recoverers(self, recoverers):
for r in recoverers:
self.recoverers.append(r)
if len(recoverers) > 0:
logger.info("New active recoverers: %s", len(self.recoverers))

@defer.inlineCallbacks
def _start_recoverer(self, service):
def _on_txn_fail(self, service):
try:
yield self.store.set_appservice_state(service, ApplicationServiceState.DOWN)
logger.info(
"Application service falling behind. Starting recoverer. AS ID %s",
service.id,
)
recoverer = self.recoverer_fn(service, self.on_recovered)
self.add_recoverers([recoverer])
recoverer.recover()
self.start_recoverer(service)
except Exception:
logger.exception("Error starting AS recoverer")

def start_recoverer(self, service):
"""Start a Recoverer for the given service
Args:
service (synapse.appservice.ApplicationService):
"""
logger.info("Starting recoverer for AS ID %s", service.id)
assert service.id not in self.recoverers
recoverer = self.RECOVERER_CLASS(
self.clock, self.store, self.as_api, service, self.on_recovered
)
self.recoverers[service.id] = recoverer
recoverer.recover()
logger.info("Now %i active recoverers", len(self.recoverers))

@defer.inlineCallbacks
def _is_service_up(self, service):
state = yield self.store.get_appservice_state(service)
return state == ApplicationServiceState.UP or state is None


class _Recoverer(object):
@staticmethod
@defer.inlineCallbacks
def start(clock, store, as_api, callback):
services = yield store.get_appservices_by_state(ApplicationServiceState.DOWN)
recoverers = [_Recoverer(clock, store, as_api, s, callback) for s in services]
for r in recoverers:
logger.info(
"Starting recoverer for AS ID %s which was marked as " "DOWN",
r.service.id,
)
r.recover()
return recoverers
"""Manages retries and backoff for a DOWN appservice.
We have one of these for each appservice which is currently considered DOWN.
Args:
clock (synapse.util.Clock):
store (synapse.storage.DataStore):
as_api (synapse.appservice.api.ApplicationServiceApi):
service (synapse.appservice.ApplicationService): the service we are managing
callback (callable[_Recoverer]): called once the service recovers.
"""

def __init__(self, clock, store, as_api, service, callback):
self.clock = clock
@@ -224,7 +244,9 @@ def _retry():
"as-recoverer-%s" % (self.service.id,), self.retry
)

self.clock.call_later((2 ** self.backoff_counter), _retry)
delay = 2 ** self.backoff_counter
logger.info("Scheduling retries on %s in %fs", self.service.id, delay)
self.clock.call_later(delay, _retry)

def _backoff(self):
# cap the backoff to be around 8.5min => (2^9) = 512 secs
@@ -234,25 +256,30 @@ def _backoff(self):

@defer.inlineCallbacks
def retry(self):
logger.info("Starting retries on %s", self.service.id)
try:
txn = yield self.store.get_oldest_unsent_txn(self.service)
if txn:
while True:
txn = yield self.store.get_oldest_unsent_txn(self.service)
if not txn:
# nothing left: we're done!
self.callback(self)
return

logger.info(
"Retrying transaction %s for AS ID %s", txn.id, txn.service.id
)
sent = yield txn.send(self.as_api)
if sent:
yield txn.complete(self.store)
# reset the backoff counter and retry immediately
self.backoff_counter = 1
yield self.retry()
else:
self._backoff()
else:
self._set_service_recovered()
except Exception as e:
logger.exception(e)
self._backoff()

def _set_service_recovered(self):
self.callback(self)
if not sent:
break

yield txn.complete(self.store)

# reset the backoff counter and then process the next transaction
self.backoff_counter = 1

except Exception:
logger.exception("Unexpected error running retries")

# we didn't manage to send all of the transactions before we got an error of
# some flavour: reschedule the next retry.
self._backoff()
2 changes: 1 addition & 1 deletion synapse/config/emailconfig.py
@@ -115,7 +115,7 @@ def read_config(self, config, **kwargs):
missing.append("email." + k)

if config.get("public_baseurl") is None:
missing.append("public_base_url")
missing.append("public_baseurl")

if len(missing) > 0:
raise RuntimeError(