Option to refuse federation with large rooms #2255
Comments
That sounds like the wrong solution to a real problem. Shouldn't we (well, not being a dev myself, it's a bit presumptuous to write "we" here, but I gladly give my 2 cents as a user...) instead look for a way for limited servers to access large rooms? Restricting room access goes against the idea of decentralization. It's a strong incentive to join a very large server rather than self-hosting. Who knows how large the largest rooms will eventually be?
It might be interesting to have a common limit (at the protocol level, as a recommendation) on the number of state events or participating servers in a room. The limit would be lifted once a number of special servers, specialized in broadcasting state events to participants, declare themselves and thereby absorb the network-wide cost of redundancy and consistency. Another option could be a spontaneous hierarchy among servers for distributing state events according to their measured capacity (watch out for plausible non-trivial attacks here). If the hierarchy is lost (faults or unavailability), the nominal behaviour of "classic Matrix" could serve as a fallback until structure emerges again. Nothing trivial to implement, sadly; just my 2 cents here, as an interested observer.
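To make the idea above concrete, here is a minimal sketch of the decision logic the commenter describes. Every name and threshold below is hypothetical; nothing here corresponds to the Matrix spec or to Synapse code.

```python
# Hypothetical sketch of the commenter's proposal: a protocol-level
# room-size recommendation that is lifted once enough dedicated
# "broadcast" servers volunteer to fan out state events, with classic
# Matrix behaviour as the fallback. All names/values are illustrative.

RECOMMENDED_MAX_STATE_EVENTS = 50_000   # hypothetical recommended limit
MIN_BROADCAST_SERVERS = 3               # hypothetical quorum of volunteers


def exceeds_recommended_limit(state_events: int, broadcast_servers: int) -> bool:
    """Return True if the room should be treated as over the size limit."""
    if broadcast_servers >= MIN_BROADCAST_SERVERS:
        # Enough specialised servers have declared themselves: the limit
        # is lifted and they carry the fan-out cost.
        return False
    # Fallback: too few broadcast servers, so behave like "classic Matrix"
    # and enforce the recommended limit.
    return state_events > RECOMMENDED_MAX_STATE_EVENTS
```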
That's why I have disabled federation with matrix.org until there is an option to set an allowed room size and a maximum number of messages to be stored on the server.
@herula we are currently working on a feature to do exactly this, so you will not have to wait too much longer. |
@neilisfragile This is good news. Thank you! |
This has been implemented via #5783 |
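For reference, a rough sketch of what the resulting homeserver.yaml option looks like. This is written from memory and should be verified against the Synapse documentation for your release; the exact option names, defaults, and the complexity formula may differ.

```yaml
# Sketch of a room-complexity limit in homeserver.yaml (verify against
# the Synapse docs for your version; illustrative, not authoritative).
limit_remote_rooms:
  # Refuse to join remote rooms whose "complexity" (a score derived
  # largely from the number of state events) exceeds the threshold.
  enabled: true
  complexity: 1.0
  complexity_error: "This room is too complex for this homeserver to join."
```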
If you're running Synapse on limited hardware, we should give server admins the option to stop their users from joining rooms with more than N state events or Y participating servers (or something along those lines), to avoid a user innocently crippling the server.
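Purely as an illustration of the kind of admission check this issue asks for: the thresholds, the `RoomStats` structure, and the `check_join_allowed` helper below are all hypothetical, not Synapse APIs.

```python
# Hypothetical sketch of the admission check described in this issue.
# None of these names are real Synapse APIs; thresholds and the
# RoomStats structure are invented for illustration only.
from dataclasses import dataclass


@dataclass
class RoomStats:
    state_events: int     # current number of state events in the room
    joined_servers: int   # number of distinct servers participating


# Admin-configurable limits: N state events, Y participating servers.
MAX_STATE_EVENTS = 10_000
MAX_JOINED_SERVERS = 200


def check_join_allowed(stats: RoomStats) -> None:
    """Raise if the room is too large for this homeserver to join."""
    if stats.state_events > MAX_STATE_EVENTS:
        raise PermissionError(
            "Room exceeds the state-event limit configured by the admin"
        )
    if stats.joined_servers > MAX_JOINED_SERVERS:
        raise PermissionError(
            "Room is federated across too many servers for this homeserver"
        )
```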