Event bus all-the-things #79
Here's my top-of-the-head plan.
This pattern gives us things like:
I read that as: you want one db process and to route every db access through the event bus. If that is what you mean, then I'm currently 👎 on it, because I believe it would have a negative impact on performance. Db access needs to be as fast as it can be, and funneling it through the event bus works against that goal. I think the other route, using an alternative db (e.g. RocksDB) that allows multiple processes to read simultaneously, is more promising.
I see this process mostly as the
I think synchronization should become a plugin that runs in an isolated process and exposes several sync-related events on the event bus. (e.g.
That's already running in its own process as an isolated plugin. Did you just mean to list it as a process? Also, I want to raise awareness that we do not have performance metrics for the EventBus under high load yet. I believe the current implementation does not handle high-frequency traffic well enough just yet. And for some things, like routing all db access through it, I'm highly skeptical it could ever be a suitable option no matter how much we optimize it.
Regarding the database issue: agreed that we can and should make whatever compromises are necessary to keep performance numbers high, but it's worth pointing out that we currently already route all database access over a multiprocessing boundary... whether the existing mechanism and the event bus have measurably different performance footprints is something we would want to check. My understanding is that in the current lahja model everything flows through a central bus, meaning two hops. Maybe we need to establish an abstraction for direct connections, so that database interactions can be direct connections with only a single hop.
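The two-hop concern above can be sketched with plain queues. This is an illustrative toy, not the lahja API: the coordinator loop, the endpoint names, and the `bus_read` helper are all hypothetical.

```python
import queue
import threading

def coordinator(inbox, endpoints):
    """Central bus: every message makes a hop here before delivery."""
    while True:
        dest, payload = inbox.get()
        endpoints[dest].put(payload)

def db_endpoint(inbox, bus, db):
    """Answers read requests; the reply also goes back through the bus."""
    while True:
        reply_to, key = inbox.get()
        bus.put((reply_to, db.get(key)))

bus = queue.Queue()
endpoints = {"db": queue.Queue(), "client": queue.Queue()}

threading.Thread(target=coordinator, args=(bus, endpoints), daemon=True).start()
threading.Thread(target=db_endpoint,
                 args=(endpoints["db"], bus, {"head": b"block-42"}),
                 daemon=True).start()

def bus_read(key):
    # Request: client -> coordinator -> db. Reply: db -> coordinator -> client.
    # A direct connection would cut each leg down to a single hop.
    bus.put(("db", ("client", key)))
    return endpoints["client"].get()

print(bus_read("head"))  # b'block-42'
```

Each round trip pays the coordinator hop twice; a direct-connection abstraction would let hot paths like db reads skip it.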
In fact, having the event bus work without that central coordination layer was on my radar as well. That said, if there are multiple consumers for an event (which the producer, in general, should not even have knowledge about), then from the perspective of the broadcasting process it is more efficient to make one hop to the central coordinator and let the coordinator make n connections to m different endpoints. But without getting lost in the specific details, there's a lot we can optimize for efficiency. I just wanted to raise awareness that it may not be that efficient yet, or at least that we don't know. I honestly think we should iterate on the RocksDB work you've been doing. Did that already make use of the fact that multiple processes can read simultaneously? Or did it just change the db without exploiting that feature yet?
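The fan-out argument can be sketched like this (illustrative only; the `Coordinator` class, the `"NewBlock"` event name, and the subscriber queues are hypothetical, not lahja's real API):

```python
import queue

class Coordinator:
    """Toy central bus: producers pay one hop; the bus does the n deliveries."""
    def __init__(self):
        self.subscribers = {}  # event name -> list of subscriber queues

    def subscribe(self, event, q):
        self.subscribers.setdefault(event, []).append(q)

    def broadcast(self, event, payload):
        # One call from the producer, regardless of how many consumers exist.
        for q in self.subscribers.get(event, []):
            q.put(payload)

bus = Coordinator()
sync_q, metrics_q = queue.Queue(), queue.Queue()
bus.subscribe("NewBlock", sync_q)
bus.subscribe("NewBlock", metrics_q)

# The producer neither knows nor cares that two consumers are listening.
bus.broadcast("NewBlock", {"number": 100})
print(sync_q.get(), metrics_q.get())
```

With direct connections instead, the producer itself would have to track and message every consumer, which is exactly the knowledge the event bus pattern tries to keep out of it.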
Just had a thought. If the interactions between synchronization and peers become event-bus based, then it gets a lot easier to mock out the peer side of things. This extends to a bunch of other things as well. Maybe the higher-level point here is that the event bus can act as a way to decouple our various sections of functionality from their implementations. I think this has some positive and negative implications, but I wanted to drop this here to make note of it.
@pipermerriam yes, that is absolutely true and one of the main upsides of the event bus pattern in general: you get very loose coupling. Events can become your main interface, and it becomes easy to swap out functionality for different implementations. The downside is complexity and "fragility" (not sure if that's the best word for it), which can be mitigated with more tests. I'm a fan of the pattern and have used it in many code bases. Coincidentally, I learned today that embark (a truffle competitor by status) is using the same pattern for the same reasons (flexibility, plugin all the things).
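A minimal sketch of that decoupling, assuming a toy request/response bus (the `EventBus` class and the `GetBlockHeaders` event name are hypothetical, not Trinity's actual types):

```python
class EventBus:
    """Toy bus: one handler per event name, queried synchronously."""
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, handler):
        self.handlers[event] = handler

    def request(self, event, payload):
        return self.handlers[event](payload)

def sync_next_header(bus, start):
    # The syncer only knows the event name, not who answers it.
    return bus.request("GetBlockHeaders", {"start": start, "count": 1})

# Production code would subscribe the real peer pool here; a test can
# subscribe a mock instead, without touching the syncer at all:
bus = EventBus()
bus.subscribe("GetBlockHeaders", lambda req: [{"number": req["start"]}])
print(sync_next_header(bus, 7))  # [{'number': 7}]
```

The event name is the whole contract, which is both the flexibility and the "fragility": nothing at import time checks that anyone actually answers `GetBlockHeaders`, hence the need for more tests.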
Just cross-linking that I've started to put a benchmark together, and so far it has revealed interesting things such as dropped events.
What is wrong?
Our use of the proxy object pattern has had continued fallout and unintended results. Specific cases that I'm thinking about are:
How can it be fixed?
I think we need to go ahead and get everything over onto an explicit event bus pattern. Please discuss.