Feature async updates #89
Conversation
This is a checkpoint. It uses an NSCondition for synchronization, which doesn't seem like the best solution, and not all of the behaviour has been cleaned up, but it gives good scrolling performance when built for release.
This still feels a bit hairy to me, but the behaviour feels easier to reason about, and performance is much improved. Also includes a bunch of cleanup and documentation improvements.
This was a quick pass, not a detailed review, but I wanted to get my thoughts across early. Overall, the approach seems solid, but I have a bunch of concerns, mostly around the mutex/semaphore pair.
I'm not sure exactly how I feel about the timeout on fetch. Hopefully it's long enough that it will only be triggered when something is going seriously wrong. Let's keep it as it is and then tune it based on experience.
Really excited to see this happen!
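For readers following along, here is a minimal sketch of the shape of a timed fetch as discussed above. The names, the timeout value, and the structure are illustrative assumptions, not the PR's actual code; the point is only that the main thread's wait is bounded by a deadline, so a misbehaving core degrades scrolling rather than hanging the UI.

```swift
import Foundation

/// Hypothetical sketch of a timed fetch (illustrative names, made-up timeout).
final class LineFetcher {
    private let linesAvailable = DispatchSemaphore(value: 0)
    /// The tuning knob discussed above: ideally long enough that it only
    /// fires when something is going seriously wrong.
    private let fetchTimeout: DispatchTimeInterval = .milliseconds(100)

    /// Called on the update thread once the requested lines have arrived.
    func signalLinesArrived() {
        linesAvailable.signal()
    }

    /// Called on the main thread. Returns false if the wait timed out,
    /// in which case the caller can draw placeholder lines instead of blocking further.
    func waitForLines() -> Bool {
        return linesAvailable.wait(timeout: .now() + fetchTimeout) == .success
    }
}
```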
```swift
    let _ = self.callback(json as AnyObject)
    // is notification
} else {
    // updates get their own codepath, staying on this thread; the main thread may be blocked
```
This feels like a layer violation, having specific knowledge of methods at this level. It seems to me we've changed the invariant so RPC handling in general is no longer guaranteed to be on the UI thread. To me, a cleaner approach (but potentially more code and with potential for more thread violations) is for all methods that need to access main-thread state to do their own Dispatch.async dance.
This sounds good.
An adjacent point: I've been thinking about the broader question of "how do we turn RPCs into calls to the client" a bit lately, in light of iOS (and of the serde-like `Decodable` protocol in Swift 4), and I'm leaning towards there just being a `Client` protocol which has methods corresponding to the various methods in the RPC protocol. In this world, we do parsing in core and call methods on the client/delegate. We'll then do away with the callback closure, and the client can be responsible for moving to the main thread as needed.
I agree, this feels more principled than a callback and is consistent with what we're doing in other parts of the system, like request handling in plugins.
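To make the shape of that suggestion concrete, here is a rough sketch of what such a client/delegate could look like. The protocol name, method names, and signatures are invented for illustration; they are not the actual RPC surface. The idea is simply that each implementation decides for itself when to hop to the main thread.

```swift
import Foundation

/// Hypothetical sketch: one method per notification in the RPC protocol,
/// replacing the generic callback closure (names are illustrative only).
protocol XiClient: AnyObject {
    func update(viewIdentifier: String, update: [String: Any])
    func scrollTo(viewIdentifier: String, line: Int, column: Int)
    func defineStyle(params: [String: Any])
}

final class AppClient: XiClient {
    func update(viewIdentifier: String, update: [String: Any]) {
        // Updates may stay on the RPC thread; the line cache handles its own locking.
        // lineCache.applyUpdate(update: update)
    }

    func scrollTo(viewIdentifier: String, line: Int, column: Int) {
        // UI work is explicitly marshalled to the main thread by the client.
        DispatchQueue.main.async {
            // editView.scroll(to: line, column: column)
        }
    }

    func defineStyle(params: [String: Any]) {
        DispatchQueue.main.async {
            // styleMap.defineStyle(params: params)
        }
    }
}
```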
XiEditor/EditView.swift (outdated)
```swift
}

// first pass, for drawing background selections and search highlights
for lineIx in first..<last {
    guard let line = getLine(lineIx), line.containsReservedStyle == true else { continue }
for lineIx in first...last {
```
Why is this an inclusive range? Doesn't seem to match the `blockingGet` call above.
I became very confused about how we describe line ranges while doing this work (the `LineRange` type alias was partly a response to this). There's also confusion because we calculate the line range for a given rect differently in different places (drawRect vs. willScroll).
Anyway: good spot; the out-of-bounds index was always getting caught in the guard statement.
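As an aside for readers, one way to keep that computation half-open everywhere is to derive the visible lines from a rect as a `CountableRange` clamped to the number of cached lines, so the loop can never produce an out-of-bounds index that the guard has to catch. The helper below is a sketch under that assumption; its name, the line-height parameter, and the clamping choice are not the project's actual code.

```swift
import CoreGraphics

/// Illustrative helper: visible lines of a rect as a half-open range,
/// clamped to the total line count.
func visibleLineRange(for rect: CGRect, lineHeight: CGFloat, totalLines: Int) -> CountableRange<Int> {
    let first = max(0, Int(floor(rect.minY / lineHeight)))
    // Half-open: `last` is one past the final visible line.
    let last = min(totalLines, Int(ceil(rect.maxY / lineHeight)))
    return first..<max(first, last)
}

// Usage: iterate the half-open range directly; no inclusive `...` needed.
// for lineIx in visibleLineRange(for: dirtyRect, lineHeight: 14, totalLines: lineCount) { ... }
```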
XiEditor/EditViewController.swift (outdated)
```swift
/// handles the `update` RPC. This is called from a dedicated thread.
func updateAsync(update: [String: AnyObject]) {
    let inval = lines.applyUpdate(update: update)
    let hasUnsavedChanges = update["pristine"] as? Bool ?? false
```
This feels backwards; `pristine` means there are no unsaved changes.
Right, good catch.
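For clarity, the corrected mapping would look something like the sketch below; the fallback when the field is absent is an assumption, and the real default may differ.

```swift
// "pristine" means the document has no unsaved changes, so the flag is inverted.
let isPristine = update["pristine"] as? Bool ?? false
let hasUnsavedChanges = !isPristine
```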
```diff
@@ -14,12 +14,15 @@
 
 import Foundation
+
+/// A half-open range representing lines in a document.
+typealias LineRange = CountableRange<Int>
```
Query only partly related to this PR: should we also be using CountableRange instead of Range for the invalidation ranges in this class?
My desire is for `LineRange` to be used anywhere we're describing a contiguous series of lines. An update to `InvalSet` is included in this PR.
(`CountableRange` has the specific advantage over `Range` of being iterable.)
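For illustration, a simplified invalidation set expressed in terms of `LineRange` could look like the sketch below. It is not the project's actual `InvalSet`; the merging logic and names are assumptions, and it assumes ranges are added in ascending order.

```swift
import Foundation

/// A half-open range representing lines in a document.
typealias LineRange = CountableRange<Int>

/// Illustrative sketch of an invalidation set built on LineRange, so the same
/// type describes both the lines being drawn and the lines needing redraw.
struct SimpleInvalSet {
    private(set) var ranges: [LineRange] = []

    /// Add a range, merging with the previous one when they touch or overlap
    /// (assumes ranges arrive in ascending order).
    mutating func addRange(_ range: LineRange) {
        if let last = ranges.last, last.upperBound >= range.lowerBound {
            ranges[ranges.count - 1] = last.lowerBound..<max(last.upperBound, range.upperBound)
        } else {
            ranges.append(range)
        }
    }
}

// Because CountableRange is a collection, callers can iterate a LineRange directly:
// for lineIx in inval.ranges.first ?? (0..<0) { redrawLine(lineIx) }
```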
XiEditor/LineCache.swift (outdated)
```swift
/// A dispatch queue used to synchronize access to the underlying state
fileprivate let queue = DispatchQueue(label: "io.xi-editor.cache-sync-queue")
/// A marker used to indicate if the main thread is blocked.
fileprivate let blockFlag = Mutex()
```
"Flag" seems confusing name for a mutex.
My intention is to communicate that this isn't being used as a mutex in the traditional sense (it isn't guarding a shared resource; the queue does that), but just as a way of telling the update thread that it needs to signal.
(I'll continue and address the subsequent comment as well.)
My initial implementation of this used `NSCondition`, but I struggled to make that work, specifically around timeouts. Apple is strongly pushing `libdispatch`-based approaches to concurrency, and the best documentation and examples I could find were based on dispatch queues and dispatch semaphores; but although I thought it was doable, I couldn't figure out a way to make that approach work without introducing some additional primitive.
I agree that there is a possible race here; however, the worst case in the race scenario (where the main thread doesn't have the lock when the update thread checks, but grabs it before the update returns) is that the main thread blocks for the full timeout interval or until the next update arrives, whichever is less.
Another approach I could consider is implementing this with `pthread_cond_t`, but then I'd have to implement my own timeout logic, which would be tricky.
Ideally there's some solution here that involves just the semaphores, and I simply couldn't find it.
If it's strictly being used for signalling, it should be an atomic boolean instead of a mutex. But I'm not sure those are first class in Swift.
Another thought I had: what about putting the semaphore inside the mutex? My thinking is that this would guarantee each wait is balanced by exactly one signal. But I'm spitballing; it feels like there is probably a good concurrency primitive for this and we just don't know what it is yet.
I've rewritten this to take the approach you've suggested of always having the wait call happen while holding the lock, which solves the problem with the race.
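Since the exact primitive was the sticking point, here is one hedged sketch of a shape that keeps each wait balanced by one signal: the "main thread is waiting" flag lives behind the same dispatch queue that guards the cache, so setting it is atomic with discovering the missing line, and a timed-out waiter cleans up after itself. This is a simplified illustration with invented names, not the code that landed in the PR (which, as described above, instead holds a lock across the wait), and it assumes a single reader (the main thread) and a single writer (the update thread).

```swift
import Foundation

/// Simplified, hypothetical sketch; not the PR's actual LineCache.
final class BlockingLineCache<Line> {
    private let queue = DispatchQueue(label: "io.xi-editor.cache-sync-queue")
    private let linesAvailable = DispatchSemaphore(value: 0)
    private let timeout: DispatchTimeInterval = .milliseconds(100)
    private var isWaiting = false          // guarded by `queue`
    private var lines: [Int: Line] = [:]   // guarded by `queue`

    /// Main thread: return the line, blocking briefly if it hasn't arrived yet.
    func blockingGet(_ ix: Int) -> Line? {
        let cached: Line? = queue.sync {
            if lines[ix] == nil { isWaiting = true }
            return lines[ix]
        }
        if let cached = cached { return cached }
        if linesAvailable.wait(timeout: .now() + timeout) == .timedOut {
            queue.sync {
                if isWaiting {
                    // No one signalled; just clear the flag.
                    isWaiting = false
                } else {
                    // An update signalled after we timed out; consume the
                    // stray signal so the next wait isn't satisfied spuriously.
                    linesAvailable.wait()
                }
            }
        }
        return queue.sync { lines[ix] }
    }

    /// Update thread: apply the update, then wake the main thread if it is blocked.
    func applyUpdate(_ newLines: [Int: Line]) {
        queue.sync {
            lines.merge(newLines) { _, new in new }
            if isWaiting {
                isWaiting = false
                linesAvailable.signal()
            }
        }
    }
}
```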
XiEditor/LineCache.swift (outdated)
```swift
/// Returns range of lines that have been invalidated
func applyUpdate(update: [String: AnyObject]) -> InvalSet {
    let inval = queue.sync { state.applyUpdate(update: update) }
    if blockFlag.isLocked() {
```
So I haven't analyzed this carefully yet, but this feels really racy: the flag will still be locked for a little while after the wait on the semaphore succeeds. The classical metaphor for this is a "condition variable", which is associated with a mutex so that the transition between the various waiting/locked/unlocked states is atomic.
I find myself super confused by concurrency in Swift, so I can't say right now which concurrency primitive is best.
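For reference, the condition-variable shape mentioned here maps onto Foundation's `NSCondition`, where the flag, the lock, and the wait live in one object, so the check-then-sleep transition is atomic. The sketch below is generic, with illustrative names (and, as noted earlier in the thread, the timeout handling is exactly what made this approach awkward in practice for this PR).

```swift
import Foundation

/// Generic sketch of the condition-variable pattern using NSCondition.
final class LineArrivalCondition {
    private let condition = NSCondition()
    private var linesArrived = false

    /// Update thread: mark lines as available and wake any waiter.
    func notifyLinesArrived() {
        condition.lock()
        linesArrived = true
        condition.signal()
        condition.unlock()
    }

    /// Main thread: wait until lines arrive or the deadline passes.
    /// Returns false on timeout.
    func waitForLines(timeout: TimeInterval) -> Bool {
        let deadline = Date(timeIntervalSinceNow: timeout)
        condition.lock()
        defer { condition.unlock() }
        // Loop to guard against spurious wakeups; wait(until:) returns false
        // once the deadline has passed.
        while !linesArrived {
            if !condition.wait(until: deadline) { return false }
        }
        linesArrived = false
        return true
    }
}
```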
```swift
class XiScrollView: NSScrollView {

    // NOTE: overriding scrollWheel: is necessary in order to disable responsiveScrolling
```
I get where this is coming from; responsive scrolling is a difficult tradeoff, with some unpleasant interactions (one of the biggies is that it routinely asks for lines outside the visible view, in preparation for scrolling). Fortunately, I think that tough tradeoff goes away when we do GPU drawing. In the meantime, my slight preference is to keep responsive scrolling, because (a) it is less of a change, and (b) it keeps us honest in a way; it's more of a stress test on the line-fetch logic.
My intention, currently, is to add support for responsive scrolling back in a subsequent PR; removing it here makes the scroll behaviour much easier to reason about.
In my experiments with responsive scrolling, it really isn't much of a stress test; pre-drawing only happens when the main thread is otherwise idle. It is much less taxing on the line-fetch logic than is grabbing the scrollbar and moving it rapidly.
It also doesn't seem to offer much of a performance improvement; even on my slow computer (a 2013 MacBook Air), in a release build of this branch I cannot make the editor miss a draw call while scrolling in jquery at 44pt.
I'll start to look into bringing it back, but my preference would be to try and play around with going to layer-based drawing first; it will be easier to get that working when our behaviour is more predictable.
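For readers unfamiliar with the mechanism being discussed: per AppKit's documented behaviour, responsive scrolling is disabled whenever the scroll view (or a containing view) overrides `scrollWheel(with:)`, which is what the `NOTE` in the snippet above refers to. The sketch below shows the opt-out in its minimal form, plus a comment on the opt-in path a follow-up could use; it is an illustration, not the PR's exact code.

```swift
import Cocoa

/// Sketch: a scroll view that overrides scrollWheel(with:) is excluded from
/// responsive scrolling (predictive pre-drawing), even if the override only
/// forwards the event.
class XiScrollView: NSScrollView {
    override func scrollWheel(with event: NSEvent) {
        // Forward the event unchanged; the override's mere presence disables
        // responsive scrolling for this scroll view.
        super.scrollWheel(with: event)
    }
}

// To bring responsive scrolling back later, the document view would instead
// opt in explicitly and implement prepareContent(in:) to pre-render overdraw:
// override class var isCompatibleWithResponsiveScrolling: Bool { return true }
```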
Branch updated from 29dd7bc to 7ce89ad.
Thanks! This looks good to me now, and I think it's a good foundation for the upcoming fast text plane. We'll see if any problems arise.
- Fix an issue where the signal would still fire when the semaphore wait failed.
- Fix an issue where we would wait on lines that didn't exist, timing out every time.
This improves scroll performance by handling updates off the main thread.