worker.postMessage performance (#38780)
I did a quick flamegraph on the benchmark for this (…). You can see that (expectedly) the V8 serialization and deserialization machinery comes with some overhead, and that calling into Node.js from C++ when a message is received also comes with quite some overhead. In terms of improving this, I have looked at it a few times, and… I think the low-hanging fruit here has been taken, but a lot of this is still worth putting effort into, because things like the call-into-JS machinery affect all of Node.js, not just MessagePort performance. For example, the … We could make the … We could pass the … In the end, most things we can do here only shave off a percent or two. I don’t know if that’s the kind of thing we’re looking for.
If you are in a situation where you can use …
Faster than … ? If you’re comparing it against SAB + Atomics, then no, that’s never going to be the case.
The receiving side is not that big of a problem. I'm mostly concerned about the overhead on the main thread.
Can the serialization overhead be reduced in some way? I would have hoped that passing a string would have low serialization overhead (i.e. just a malloc + memcpy). Does passing some form of typed/shared buffer have better performance?
Most likely, yes. I'd assume it also depends on the string itself (whether it's a tree or already flattened, for example).
@ronag If you’re worried about the sending side… I don’t know, I guess we could create a fast path for people who only transfer strings/typed arrays that skips the serialization steps and writes data a bit more directly. It is true that the V8 serializer adds somewhat undue overhead in that case. This would definitely make the code quite a bit more complex, though. We can also see if we can avoid …
That's probably just a good idea in general.
Yeah, I was looking at that a while back and decided against doing anything, precisely because of the additional complexity. What I kept coming back to is the idea that maybe what would be most helpful is actually teaching V8 how to handle (and optimize) the async context itself, so that we wouldn't necessarily have to allocate anything additional per async call into JS. But that's a larger discussion than just postMessage. Fiddling around with a few percentage points is not going to make a huge difference, no, but I do think it's worthwhile where it's not too much effort.
Since it is much more common to send messages than to add or remove ports from a sibling group, using a rwlock is appropriate here. Refs: nodejs#38780 (comment)
I think doing a fast path for buffer and string would make sense.
A little off topic, but looking at the source code I'm curious as to why we would even need a … Also, most of the lists in … Also, most of the fields in …:

```cpp
struct Data {
  std::vector<std::shared_ptr<v8::BackingStore>> array_buffers_;
  std::vector<std::shared_ptr<v8::BackingStore>> shared_array_buffers_;
  std::vector<std::unique_ptr<TransferData>> transferables_;
  std::vector<v8::CompiledWasmModule> wasm_modules_;
};

MallocedBuffer<char> main_message_buf_;
std::optional<Data> data_;  // or std::unique_ptr<Data> data_;
```
Using flattened strings seems to be significantly faster.
Well, yes, but we do want …
I don’t think that’s a surprise, although I would expect that serializing them is also a flattening operation.
Yes, but in my case I was doing something like:

worker.postMessage(`ADD/REMOVE_${data}`)

where … However, by doing some magic I managed to change it to:

worker.postMessage(data) // TOGGLE ON/OFF instead of ADD/REMOVE, and hope we don't have any state bugs

It's a bit unfortunate that I don't have a performant way to send meta with a payload. Before this I also did:

worker.postMessage({ data, type: 'add/remove' })

Improving this could also be relevant for piscina. I guess the most common use for piscina is to offload the main thread.
Since it is much more common to send messages than to add or remove ports from a sibling group, using a rwlock is appropriate here. Refs: #38780 (comment) PR-URL: #38783 Reviewed-By: James M Snell <jasnell@gmail.com> Reviewed-By: Joyee Cheung <joyeec9h3@gmail.com>
Refs: nodejs#38780 (comment) PR-URL: nodejs#38784 Reviewed-By: Gireesh Punathil <gpunathi@in.ibm.com> Reviewed-By: Michaël Zasso <targos@protonmail.com> Reviewed-By: James M Snell <jasnell@gmail.com> Reviewed-By: Colin Ihrig <cjihrig@gmail.com> Reviewed-By: Minwoo Jung <nodecorelab@gmail.com>
I'm using worker.postMessage(myString) to send data to my worker. However, I'm getting significant performance overhead from postMessage: I'm spending 16% of my total CPU time there. The strings are usually quite short, but can be up to 256 chars. Where does the overhead of postMessage mostly come from, and are there ways to improve it? I might be interested in trying to improve it if someone could point me in the right direction. I've been considering using a SharedArrayBuffer, writing the strings into the buffer, and then using Atomics.wait/notify. However, that feels wrong, as it basically just re-implements what postMessage could/should be doing anyway. Is MessageChannel faster?
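For reference, the SharedArrayBuffer + Atomics.wait/notify alternative mentioned above could look roughly like this; the memory layout (flag word, length word, UTF-16 data) is invented for illustration, and a real consumer would be a worker calling `Atomics.wait` on the flag before reading:

```javascript
// Sketch: passing short strings through shared memory instead of
// postMessage. Layout: [int32 flag][int32 length][up to 256 UTF-16 units].
const sab = new SharedArrayBuffer(4 + 4 + 512);
const flag = new Int32Array(sab, 0, 1);
const len = new Int32Array(sab, 4, 1);
const data = new Uint16Array(sab, 8, 256);

function send(str) {
  for (let i = 0; i < str.length; i++) data[i] = str.charCodeAt(i);
  Atomics.store(len, 0, str.length);
  Atomics.store(flag, 0, 1);   // publish
  Atomics.notify(flag, 0);     // wake a waiting worker, if any
}

function receive() {
  // in a worker, you would Atomics.wait(flag, 0, 0) here first
  return String.fromCharCode(...data.subarray(0, Atomics.load(len, 0)));
}

send('hello');
console.log(receive());  // 'hello'
```

This avoids serialization entirely, at the cost of hand-rolled synchronization and a fixed-size buffer; that trade-off is exactly what the issue is weighing against postMessage.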