Introduce a default synchronization context for wasm #72652
Conversation
Tagging subscribers to 'arch-wasm': @lewing

Issue Details

This PR introduces a JSSynchronizationContext that is installed automatically on the main thread when we initialize the bindings. It implements both Send and Post on top of emscripten's main-thread call proxying API when you use them from workers; on the main thread it just invokes the callback immediately.

I tried to keep the implementation as simple as possible without making it slow enough to produce a meaningful performance regression, since things like await will use this by default once it's merged. The queue of jobs is maintained in managed code and pumped by a dispatcher; if we notice that the dispatcher is not currently waiting to run when we add something to the queue, we kick off a request to run it. This means that in the worst case we run the dispatcher once per work item, but if a bunch of work items end up in the queue, they can be serviced efficiently by a single invocation of the dispatcher. For calls from the main thread, performance should be equivalent to what it was before thanks to the fast path.
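Roughly, the Post path described above has the following shape. This is only a minimal sketch, not the PR's actual code: MainThreadSynchronizationContext, RequestPump, and the pump-scheduling flag are placeholder names standing in for the managed queue plus emscripten's call proxying.

using System;
using System.Collections.Concurrent;
using System.Threading;

// Sketch only: fast path on the main thread, queue-and-pump from workers.
internal sealed class MainThreadSynchronizationContext : SynchronizationContext
{
    private readonly ConcurrentQueue<(SendOrPostCallback d, object? state)> _queue = new();
    private readonly int _mainThreadId = Environment.CurrentManagedThreadId;
    private int _pumpScheduled; // 0 = idle, 1 = a pump request is already outstanding

    public override void Post(SendOrPostCallback d, object? state)
    {
        if (Environment.CurrentManagedThreadId == _mainThreadId)
        {
            d(state); // fast path: already on the main thread, invoke immediately
            return;
        }

        _queue.Enqueue((d, state));
        // Only request a pump if none is pending, so a burst of posts is
        // serviced by a single dispatcher invocation.
        if (Interlocked.Exchange(ref _pumpScheduled, 1) == 0)
            RequestPump();
    }

    // Runs on the main thread and drains everything currently queued.
    private void Pump()
    {
        Interlocked.Exchange(ref _pumpScheduled, 0);
        while (_queue.TryDequeue(out var item))
            item.d(item.state);
    }

    private void RequestPump()
    {
        // Placeholder: the real context would use emscripten's main-thread
        // call proxying here to get Pump() invoked on the main thread.
    }
}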
I took a pass through the concurrent logic and it seems ok (and I will take a more detailed look if we decide to go with it), but I'd like to understand first if it's possible to use some higher-level abstraction instead. We're setting up a channel that workers write into and that the main thread reads from. Can we use an actual channel?
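Purely as an illustration of that suggestion (this is not the PR's code; it just assumes System.Threading.Channels), the channel could look something like:

using System;
using System.Threading;
using System.Threading.Channels;

// Unbounded channel of work items; only the main thread reads from it.
var queue = Channel.CreateUnbounded<(SendOrPostCallback d, object? state)>(
    new UnboundedChannelOptions { SingleReader = true });

// Worker side: enqueue a callback to be run on the main thread.
SendOrPostCallback callback = static state => Console.WriteLine(state);
queue.Writer.TryWrite((callback, (object?)"hello from a worker"));

// Main-thread side: drain whatever is currently queued.
while (queue.Reader.TryRead(out var item))
    item.d(item.state);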
The design makes sense to me
Found a few minor things in the implementation, but it's cleanup, not deep issues.
Force-pushed from e49b79e to 1fa99ef
public override void Post (SendOrPostCallback d, object? state) {
    var workItem = new WorkItem(d, state, null);
    while (!Queue.Writer.TryWrite(workItem))
When would this ever return false? The channel is unbounded.
The documentation was unclear on when this could fail, and I didn't have a chance to read it yet since there are multiple queue implementations. Wasn't sure if it was a wait-free thing where TryWrite failing means 'try again, someone had the lock'
cc @IEvangelist, a chance to improve the docs! Though this might be an API doc.
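For what it's worth, a quick illustration of the behavior in question (as I understand it, an unbounded channel's TryWrite only fails once the writer has been completed, not under contention):

using System;
using System.Threading.Channels;

var channel = Channel.CreateUnbounded<int>();
Console.WriteLine(channel.Writer.TryWrite(1)); // True
channel.Writer.Complete();                     // no more writes accepted
Console.WriteLine(channel.Writer.TryWrite(2)); // False: the writer is completed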
LGTM except the managed-exports.ts bits which I don't understand
/azp run runtime-wasm
Azure Pipelines successfully started running 1 pipeline(s).
Lots of failures on the wasm lanes, but none of them look related to this PR. Not sure what to do about that. I see some 'temp file exists' errors that seem to be some sort of problem with the build machine/build environment (I didn't touch native IO at all), and then a bunch of problems in websockets that appear related to SharedArrayBuffer, which I also didn't change.
…ion calls back to the browser thread and queues them as background jobs.
Exercise sync context in threads sample to display the complete progress indicator
Undo changes to normal browser sample
Update a comment and address PR feedback
Remove prints and update comments
Cleanup debugging changes
Migrate install method to new marshaling API and clean up merge damage