Performance regression when enumerating objects after many deletes/inserts #31961
Comments
Sounds like #31957 is the PR that fixes this.
Same issue here. I bisected to the same Node.js/V8 change for the fix (v8/v8@f7771e5). The original performance regression bisects to v8/v8@6621c2d, first seen in V8 7.1.310, which is consistent with all Node 12.x and 13.x versions having the bug, since Node 12.0.0 jumped to V8 7.4.288 (Node 11.15.0 being on V8 7.0.276.38). I agree with mmarchini (#31957 (comment)) that porting the fix back to Node 12.x's V8 would be great. I first worked around the immediate bug by replacing the very large POJO dicts with Maps, but Maps don't perform as well as Node 10 POJOs (or Node 12 POJOs for the first 8MM insert/deletes) for our workload.
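For context, a minimal sketch of the kind of workaround described above, replacing a plain-object dictionary with a Map for heavy insert/delete churn (the identifiers here are illustrative, not taken from the affected codebase):

```js
'use strict';

// Plain-object store: on affected V8 versions, heavy delete churn
// eventually triggers the slow path described in this issue.
const objStore = {};

// Map-based store: unaffected by the regression, and the structure
// engine developers recommend for large, frequently mutated key/value sets.
const mapStore = new Map();

for (let i = 0; i < 1000000; ++i) {
  const key = `key${i}`;

  objStore[key] = i;
  delete objStore[key];

  mapStore.set(key, i);
  mapStore.delete(key);
}
```

The Map keeps a roughly constant cost per operation, which is why it works as a stopgap even though, per the comment above, it benchmarked somewhat slower than Node 10 POJOs for that particular workload.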
The next release of Node.js 12.x is scheduled for March 24th, with an RC planned for the 17th. That should allow the patch to land in a 13.x release next week. The potential fix does not land cleanly on 12.x, though, and I've requested a backport.
Oh, I've been tracking this down for the past few days. It regressed, it seems, as part of the V8 update for 12.16.0 (bisect pointed me to b4e8f0d, which is the first commit that compiles for that release).
Edit: I've tested 3d894d0 against the commit before it and it does fix the OP example, but it does not fix the example that I have, so there must be something more to it. I'll try to create a reduced test case...

Edit 2: the code below seems to be consistently slower on node@12.16.1 than it used to be on node@12.15.0, but it's only 1-2%, so I'm not entirely sure it accounts for all the slowness I'm seeing:

```js
'use strict';

const NUM_METHODS = 250;
const ITERATIONS = 10000;

// Build an object with many properties.
const obj = {};
for (let i = 0; i < NUM_METHODS; ++i) {
  obj[`item${i}`] = {};
}

// Time repeatedly copying it into a fresh object.
console.time();
for (let i = 0; i < ITERATIONS; ++i) {
  const res = Object.assign({}, obj);
}
console.timeEnd();
```

The above code produces roughly the same results before and after 3d894d0, so this points to another regression anyways?
@dominykas it sounds like a different regression. Do you mind opening a separate issue?
Node.js v12 is affected by a V8 regression where many inserts/deletes on a POJO will trigger an undefined state in V8, leading to a huge performance regression that can cause each insert/delete operation on that object to take hundreds of times longer. The Map structure is not affected by this performance regression (and is recommended by engine developers for dealing with large key/value stores). Contexts in cls-ah are stored on a POJO, and since inserts/deletes happen often (they are triggered by async_hooks on init and destroy), applications with high RPS or a high number of async operations will hit the threshold, turning the regression on and adding a huge performance overhead to each new async operation. Changing to a Map avoids that issue. Ref: nodejs/node#31961
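As a rough illustration of the shape of that change (hypothetical names, not the actual cls-ah source), the context store keyed by asyncId moves from a plain object to a Map, so the per-request insert/delete churn driven by async_hooks no longer degrades a single object:

```js
'use strict';

const async_hooks = require('async_hooks');

// Before (affected): const contexts = {};
// After (workaround): store contexts in a Map instead.
const contexts = new Map();

async_hooks.createHook({
  init(asyncId, type, triggerAsyncId) {
    // Propagate the parent's context to the new async resource.
    const parent = contexts.get(triggerAsyncId);
    contexts.set(asyncId, parent !== undefined ? parent : {});
  },
  destroy(asyncId) {
    contexts.delete(asyncId);
  },
}).enable();
```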
It looks like the fix landed in v12.16.2.
Once we upgraded a service to node 12, we noticed a huge spike in CPU utilization. A client library was doing many inserts/deletes on an object over time.
Big thanks to @mmarchini who root caused and wrote the reproducible code.
What steps will reproduce the bug?
Inserting and deleting many properties on an Object (`{}`) will cause the issue. The following script reproduces the issue:
On Node 12.x it produces results something like:
While on Node 10.x it produces results something like:
(results from `Darwin nfml-ghermetoX5J 18.7.0 Darwin Kernel Version 18.7.0: Thu Jun 20 18:42:21 PDT 2019; root:xnu-4903.270.47~4/RELEASE_X86_64 x86_64`)

Additional information
After a lengthy investigation we could verify that V8 version 7.8.279.23, which ships with Node.js version 12.16.x, has a regression that was introduced in V8 7.x. The issue is fixed in V8 8.x and was caused by a bitfield overflow after many deletes/inserts on an object. The issue doesn't manifest when using a Map instead of a plain Object.
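The original reproduction script and its output are not preserved in this excerpt; a sketch along the lines described above (heavy insert/delete churn on a plain object, then timing enumeration) might look like the following, with the constants chosen arbitrarily:

```js
'use strict';

const obj = {};

for (let round = 0; round < 10; ++round) {
  // Insert and delete a large number of properties on the same object.
  for (let i = 0; i < 100000; ++i) {
    obj[`key${i}`] = i;
    delete obj[`key${i}`];
  }

  // Keep a few properties so there is something left to enumerate.
  obj[`keep${round}`] = round;

  // On affected Node 12.x builds the later rounds become dramatically slower.
  console.time(`round ${round}`);
  for (let i = 0; i < 10000; ++i) {
    Object.keys(obj);
  }
  console.timeEnd(`round ${round}`);
}
```

Swapping the plain object for a `new Map()` (using `set`/`delete` and iterating `map.keys()`) keeps each round roughly constant, consistent with the observation that a Map is not affected.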