Remove most remaining workspace changed on UI thread #79391
Conversation
(Force-pushed from 6c0511b to 7e62a44.)
All subscribers work in a thread-safe manner (they either take locks to clear caches, or use existing batching/threading primitives to queue new work).
Subscribers were already thread safe. I've also removed the IMPORTANT warning since it seems quite stale -- this is already in a delayed queue so it's not really clear what it meant.
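For illustration, here is a minimal, self-contained sketch of the two patterns described above (the types and names are hypothetical stand-ins, not the actual Roslyn subscribers): clear a cache under a lock, or hand the event off to an existing queueing primitive so the real work happens elsewhere.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Pattern 1: the handler only clears a cache, guarded by a lock, so it is
// safe to invoke from any thread.
sealed class CacheClearingSubscriber
{
    private readonly object _gate = new();
    private Dictionary<string, object>? _cache;

    public void OnWorkspaceChanged(object? sender, EventArgs e)
    {
        lock (_gate)
            _cache = null;
    }
}

// Pattern 2: the handler only enqueues work into an existing threading
// primitive (a ConcurrentQueue here, standing in for a batching work queue);
// a background consumer drains it later, off the event-raising thread.
sealed class QueueingSubscriber
{
    private readonly ConcurrentQueue<EventArgs> _workQueue = new();

    public void OnWorkspaceChanged(object? sender, EventArgs e)
        => _workQueue.Enqueue(e);
}
```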
I'm unable to figure out what the intent was here -- it clears the list but only on a SolutionChanged event, which is the type of event raised if multiple projects are modified at once in a single workspace change -- any other type of event would have a more specific kind. I could imagine the intent might have been for solution close, but then I can imagine scenarios where the user might have pasted a stack and now needs to switch the solution to navigate from the stack. Since this likely never ran, I'm just deleting it.
If the user expands a node in a CPS project wanting to look at the diagnostics under an analyzer, but we haven't been told about that analyzer yet, we stick a WorkspaceChanged handler there to find it once it comes back. We were hopping to the UI thread to see if the CanonicalName of the item could have changed, but that's not really going to happen ever for these items, so we can just grab it once during creation and be done with it.
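A hedged sketch of that change (the type and member names below are stand-ins, not the real CPS/solution-explorer types): read CanonicalName once at construction and keep it, so the WorkspaceChanged handler never needs the UI thread.

```csharp
using System;

// Hypothetical stand-in for the hierarchy item whose CanonicalName used to be
// re-read on the UI thread.
interface IHierarchyItemStub
{
    string CanonicalName { get; }
}

sealed class AnalyzerNodeSource
{
    private readonly string _canonicalName;

    // Capture the canonical name once during creation; for these items it
    // never changes, so there's no need to hop back to the UI thread later.
    public AnalyzerNodeSource(IHierarchyItemStub item)
        => _canonicalName = item.CanonicalName;

    // The WorkspaceChanged handler can now compare against the cached value
    // entirely on a background thread.
    public bool MatchesProject(string projectCanonicalName)
        => string.Equals(_canonicalName, projectCanonicalName, StringComparison.OrdinalIgnoreCase);
}
```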
This looks to be safely callable from any thread.
(Force-pushed from 7e62a44 to 452703e.)
Now that we're doing all the WorkspaceChanged notifications on background threads, that means the notifications won't be quite as batched as before; this was causing an increase in synchronizations that was creating some wasted work.
(Force-pushed from b6083e1 to b17682a.)
```csharp
        Selection = null;
        Frames.Clear();
    }
}
```
why was this safe/correct to remove?
Ah, the comment was in the previous PR of this: I suspect the intended behavior of this code was to clear out the list of frames on solution close; i.e. you closed the solution so you're done with it. But the code had a typo of "SolutionChanged", which doesn't make any sense at all. It means you'll clear the frame list on something like a project being loaded (which will raise a SolutionChanged if it impacts project-to-project references)...but not anything else? Clearly we haven't had problems with it not being cleared on solution close, and I didn't understand why that'd be good anyways (maybe you've pasted in a stack and you need to switch the solution to navigate to it?), so I just deleted this.
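For reference, a small sketch of the distinction being discussed, using Roslyn's public WorkspaceChangeKind enum (the handler and its wiring here are hypothetical): SolutionChanged is a bulk in-place change to an open solution, while a close would surface as SolutionRemoved (or SolutionCleared).

```csharp
using Microsoft.CodeAnalysis;

sealed class StackFrameListListener
{
    // Hypothetical handler illustrating which event kinds mean what.
    private void OnWorkspaceChanged(object? sender, WorkspaceChangeEventArgs e)
    {
        switch (e.Kind)
        {
            case WorkspaceChangeKind.SolutionChanged:
                // Bulk in-place change (e.g. several projects modified at once);
                // the solution is still open, so clearing state here is surprising.
                break;

            case WorkspaceChangeKind.SolutionRemoved:
            case WorkspaceChangeKind.SolutionCleared:
                // This is where a "clear state on solution close" intent would live.
                break;
        }
    }
}
```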
```diff
  // the middle.
  _synchronizeWorkspaceQueue = new AsyncBatchingWorkQueue(
-     DelayTimeSpan.NearImmediate,
+     DelayTimeSpan.Short,
```
250 'feels' good to me. It's less than a user could ever notice, while allowing a lot to batch up in firehose scenarios.
Yeah, I imagine this will be noticeable to those users who can perfectly time opening a document right after solution load completes and can do their first OOP dependent operation within 200ms.
Test insertion PR: https://devdiv.visualstudio.com/DevDiv/_git/VS/pullrequest/656448
This is a reapply of #78778 that got reverted due to a potential regression.
Reviewing commit-at-a-time is recommended, since each use gets its own commit with further analysis there.
The remaining uses of eventing (for any kind of workspace event) that still hit the UI thread are:
The last commit makes a larger change for performance to avoid a regression in our internal performance tests. A bit of background is needed to explain it:
Our WorkspaceChanged events are raised with a batching work queue with a delay of zero. When we have events to raise on the UI thread, the batch handler first raises the background thread notifications, then jumps to the UI thread (if needed) and raises them there. This means that in this case, our events are roughly 'throttled' by the availability of the UI thread; so we're likely to end up with a flurry of background events, then foreground events, then background events, etc.
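Roughly, that flow looks like the following sketch (the helper names and fields are hypothetical, not the actual Roslyn implementation; the thread switch uses Visual Studio's JoinableTaskFactory):

```csharp
// Hypothetical batch handler: background-thread subscribers are notified
// first, and we only pay for a switch to the UI thread when something there
// actually needs it, which is what effectively throttled the old cadence.
private async Task ProcessEventBatchAsync(
    ImmutableArray<WorkspaceChangeEventArgs> batch, CancellationToken cancellationToken)
{
    // 1. Raise the notifications that are safe on any thread.
    foreach (var args in batch)
        RaiseBackgroundThreadHandlers(args);  // hypothetical helper

    // 2. Hop to the UI thread only if there are UI-thread subscribers.
    if (HasMainThreadHandlers)                // hypothetical flag
    {
        await _joinableTaskFactory.SwitchToMainThreadAsync(cancellationToken);

        foreach (var args in batch)
            RaiseMainThreadHandlers(args);    // hypothetical helper
    }
}
```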
Usually these event handlers are feeding into other batching work queues, including the one we use to synchronize the VS Workspace to our OOP process to keep things relatively up to date. That had a delay of 50ms to batch up a request to start syncing it again. My belief is that prior to the WorkspaceChanged change, that delay didn't matter so much -- the actual batching may have been that if (say) the UI thread was only available once every 100ms to fire events, then that was the real rate limiter. Once that was freed up, we were creating a lot more batches. I did some tests counting the number of batches during a Roslyn solution load and it seemed to be roughly double, which matched some of the extra allocation overhead happening in StreamJsonRpc as we were synchronizing with OOP.
I have changed the batching delay for the OOP sync from 50ms to 250ms; that number was chosen because that's what "Short" is in our delay choices. The excess memory allocations seen before have disappeared, making me think we're now doing fewer but larger batches. The ManagedLanguages.SolutionManagement RPS test shows a 10% reduction in ServiceHub allocations compared to the baseline. I don't imagine the extra delay would be particularly noticeable to a user, since it'd only impact the final sync that happens after a solution load, which isn't hugely critical. The immediate sync that we have of a changed document is left in place.
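To make the trade-off concrete, here is a minimal, self-contained sketch of the batching idea (this is not Roslyn's internal AsyncBatchingWorkQueue, just an illustration): items that arrive inside the delay window coalesce into one batch, so a 250ms window yields fewer, larger batches than a 50ms one during a firehose of events.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

sealed class SimpleBatchingQueue<T>
{
    private readonly Channel<T> _channel = Channel.CreateUnbounded<T>();
    private readonly TimeSpan _delay;
    private readonly Func<IReadOnlyList<T>, CancellationToken, Task> _processBatchAsync;

    public SimpleBatchingQueue(TimeSpan delay, Func<IReadOnlyList<T>, CancellationToken, Task> processBatchAsync)
    {
        _delay = delay;
        _processBatchAsync = processBatchAsync;
    }

    public void AddWork(T item) => _channel.Writer.TryWrite(item);

    public async Task RunAsync(CancellationToken cancellationToken)
    {
        while (await _channel.Reader.WaitToReadAsync(cancellationToken))
        {
            // Wait out the delay window so more items can accumulate...
            await Task.Delay(_delay, cancellationToken);

            // ...then drain everything that arrived into a single batch.
            var batch = new List<T>();
            while (_channel.Reader.TryRead(out var item))
                batch.Add(item);

            await _processBatchAsync(batch, cancellationToken);
        }
    }
}
```

With a 50ms window the queue wakes up almost as often as events arrive; at 250ms the same event stream collapses into far fewer batches, which matches the drop in StreamJsonRpc allocation overhead described above.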