FEATURE: improve performance for publish / discard workspaces #4286
base: 9.0
Conversation
... this ensures that during publishing of nodes, we do not fork sub-processes for event processing (one for every event). This typically speeds up these operations by a factor of 2-3. It *could* have side effects if caches were not properly cleared, but I have not seen any yet (and we run the tests with this flag often). Resolves: #4285
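To illustrate why skipping the per-event fork helps, here is a minimal sketch in Python (all names are made up for illustration; this is not the actual Neos 9 API): with synchronous catch-up, projections are updated once per batch in the current process, instead of forking and bootstrapping a sub-process for every single event.

```python
# Hypothetical sketch: count how often a "sub-process fork" would happen
# when publishing a batch of events per-event vs. synchronously in-process.

FORKS = {"count": 0}

class Projection:
    """A minimal read model that records which events it has applied."""
    def __init__(self):
        self.applied = []

    def apply(self, event):
        self.applied.append(event)

def fork_and_catch_up(projection, events):
    # Stands in for the expensive part: forking a sub-process,
    # bootstrapping the framework, and catching the projection up.
    FORKS["count"] += 1
    for event in events:
        projection.apply(event)

def publish_per_event(projection, events):
    """Old behaviour: one fork + catch-up round-trip per event."""
    for event in events:
        fork_and_catch_up(projection, [event])

def publish_synchronous(projection, events):
    """Synchronous behaviour: a single in-process pass over the batch,
    no fork at all. The resulting projection state is identical."""
    for event in events:
        projection.apply(event)
```

Both paths produce the same projection state; the difference is purely the per-event process overhead, which is why the speed-up grows with the number of nodes being published or discarded.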
I have to admit that I struggle a bit with this direction: it moves this somewhat hacky global state into "user land", and I can already see lots of places where it will sneak in to optimize performance, eventually turning big parts of the system into a single process, which I'm afraid makes it more fragile.
I'm currently experimenting with a greatly improved catch-up mechanism that can be driven via WebSockets, and I hope that this will improve performance drastically.
@skurfuerst are you OK with waiting a bit longer to see if my approach actually works out?
@bwaidelich sure :) go ahead :)
I agree with @bwaidelich and share the concern that the synchronous option will sneak into client code and other packages as a "performance improvement" as well. Speeding up the catch-up is currently a big concern, as we block nearly EVERY command we issue. That is mostly because we need to update the projections so that our soft constraints work (but Bastian told me about an alternative: an in-memory content graph projection. Maybe you can elaborate further, @bwaidelich?)
I'll try (:

a) Graph for the write side
A simplified version of the graph that is only concerned with the hierarchy (not with properties etc.) and that is kept up to date. I tested this in a PoC with ~20k pending changes in one transaction and it seemed to work fine.

b) Dynamic Consistency Boundary
A new concept (long post!) that would basically allow building up a tree and then doing constraint checks only in the affected subtree. Probably this wasn't really clear, but the main benefit compared to a) is that you could move and change different parts of the tree at the same time without risking race conditions (leading to failing commands). That's all for the future though. For 9.0 I would like to tweak the catch-up handling at least:
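The subtree-scoped checking in b) can be sketched as follows (a hypothetical illustration, not the actual Dynamic Consistency Boundary design; all names are made up): a command locks and constraint-checks only the subtree it affects, so commands touching disjoint subtrees cannot conflict.

```python
# Hypothetical sketch: constraint checks scoped to the affected subtree.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def find(self, path):
        """Descend along a path of node names to the affected subtree root."""
        node = self
        for name in path:
            node = next(c for c in node.children if c.name == name)
        return node

def check_subtree(node):
    """An example soft constraint, checked only inside the affected
    subtree: sibling names must be unique."""
    names = [c.name for c in node.children]
    if len(names) != len(set(names)):
        return False
    return all(check_subtree(c) for c in node.children)

def add_child(root, path, name):
    """A 'command': mutate one subtree, then re-check only that subtree."""
    subtree = root.find(path)
    subtree.children.append(Node(name))
    if not check_subtree(subtree):
        subtree.children.pop()  # reject the command, leave the tree intact
        raise ValueError(f"duplicate sibling name: {name}")
```

Because only the affected subtree is checked, two commands adding nodes under different parents never need to wait on each other, which is the race-condition benefit mentioned above.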
btw: the in-memory graph I mentioned could also serve as a potential "compaction" mechanism: with it we could rebuild a graph at any version and create the minimal set of events from the result.
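The compaction idea can be sketched like this (a simplified illustration with made-up event shapes, not the content-repository event model): replay the full history into an in-memory state, then derive the minimal event list that reproduces the same final state.

```python
# Hypothetical sketch of event compaction via an in-memory projection.
# Events are simplified to (key, value) pairs with latest-value-wins
# semantics; the real graph events are of course richer than this.

def replay(events):
    """Fold the event history into the current state."""
    state = {}
    for key, value in events:
        state[key] = value
    return state

def compact(events):
    """Rebuild the state at the head version, then emit the minimal
    events needed to reproduce it (one per surviving key)."""
    return list(replay(events).items())
```

Replaying the compacted history yields the same state as replaying the full history, just with redundant intermediate events dropped.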
@skurfuerst I think this one is obsolete with our in-process (sync) CR, right?
Neos UI part: neos/neos-ui#3494
Related: #5303