Optionally wrap msgpack_object instead of converting to V8 object #40

Open
comick opened this issue Feb 26, 2017 · 0 comments
Comments


comick commented Feb 26, 2017

Currently msgpack.unpack() converts the msgpack_object produced by the C/C++ library into a V8 object by visiting the msgpack_object recursively.
Such an approach is known to be roughly 3.5 times slower than JSON.parse(), and I observed approximately the same slowdown on Node.js 7.5 with msgpack 1.0.2.
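
For illustration, the eager strategy is conceptually a deep walk like the plain-JS sketch below (an analogue only: the real conversion happens in C++ through Nan::New, and none of these names come from msgpack.cc):

// Conceptual analogue of the eager conversion: every node of the decoded
// tree is visited and a fresh JS value is built for it, whether or not
// the caller ever reads it.
function toV8Eagerly(node) {
    if (Array.isArray(node)) {
        return node.map(toV8Eagerly);           // recurse into arrays
    }
    if (node !== null && typeof node === 'object') {
        var out = {};
        for (const key of Object.keys(node)) {
            out[key] = toV8Eagerly(node[key]);  // recurse into maps
        }
        return out;
    }
    return node;                                // scalars are returned as-is
}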

In msgpack.cc, the recursive implementation of msgpack_to_v8 repeatedly calls Nan::New.
Building the V8 object is the most expensive part: with V8 object creation disabled (while the C library's msgpack_unpack is still called), I got encouraging numbers:

json: 723.140ms msgpack: 458.431ms

median of 10 runs of:

var msgpack = require('msgpack');   // the native addon being benchmarked
// `data` is the sample payload used for the benchmark (not shown here).

console.time('json');
var sj = JSON.stringify(data);
for (let i = 0; i < 500000; i++) {
    JSON.parse(sj);
}
console.timeEnd('json');

var sm = msgpack.pack(data);        // packed once, outside the timed loop
console.time('msgpack');
for (let i = 0; i < 500000; i++) {
    msgpack.unpack(sm);
}
console.timeEnd('msgpack');

This suggests that most of the time is spent building the V8 object.

In use cases where the full object is not needed, or is not repeatedly accessed, better performance could be achieved by wrapping the msgpack_object struct in a V8 object whose properties are exposed through lazy getters / computed property access (I am not sure how transparent that can be made).

If an msgpack_object representing:

var o = {"a" : 1, "b" : 2, "c" : [1, 2, 3]};

is given, then accessing o.c[1] creates and returns only a single number instead of the full object structure.
The assumption is that this would have less overhead and performance comparable to direct use of the C library.
Was this ever tried? Does it make sense at all?
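
To make the intended access pattern concrete, here is a rough plain-JS sketch using a Proxy. It only simulates the behaviour (the real wrapper would have to be implemented in C++ against the retained msgpack_object), and nothing in it reflects an existing API of this addon:

// Sketch only: containers are wrapped lazily and a JS value is produced
// for exactly the entries that get accessed; in the addon this handler
// would live in C++ and read from the retained msgpack_object.
function lazyView(node) {
    if (node === null || typeof node !== 'object') {
        return node;                            // leaf: materialize a single value
    }
    return new Proxy(node, {
        get: function (target, prop, receiver) {
            // Only the accessed entry is converted; siblings stay untouched.
            return lazyView(Reflect.get(target, prop, receiver));
        }
    });
}

var o = lazyView({ "a": 1, "b": 2, "c": [1, 2, 3] });
console.log(o.c[1]);                            // 2 -- only this number is built

One open question with this approach is lifetime: the native side would presumably have to keep the underlying unpacked data alive for as long as such a view is reachable.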
