src: fix race on modpending by migrating it to a thread local #21611
Conversation
Thanks for the PR!
Just fyi, there are some alternative approaches that address this problem more completely – #20759 goes a bit into this direction (or at least touches a lot of this code), and I wanted to fix up addaleax/node@541ec98 to work as a “proper” solution to this problem.
In particular, Node shouldn’t load addons from Workers that don’t explicitly state that they support being loaded in such an environment – many addons create some kind of libuv handles, and don’t install proper cleanup hooks that close those handles when the Worker shuts down.
If you want to help with that, that would be appreciated a lot 💙
src/node.cc
Outdated
```diff
@@ -1144,7 +1143,7 @@ extern "C" void node_module_register(void* m) {
     mp->nm_link = modlist_linked;
     modlist_linked = mp;
   } else {
-    modpending = mp;
+    Environment::GetThreadLocalEnv()->modpending = mp;
```
Almost any use of `Environment::GetThreadLocalEnv()` is going to be buggy, because Node.js generally seeks to support embedders who create multiple `Environment`s per thread :(
@addaleax Thanks, I was unaware of that. I've migrated to using a thread local instead, which should properly support multiple `Environment`s per thread.
Force-pushed from 18b1cde to 5c3e676 (compare)
Fixes a rare race condition on modpending when two native modules are loaded simultaneously on different threads by migrating modpending to be a member of Environment.
Force-pushed from 5c3e676 to 7f2c8bf (compare)
@addaleax modules which rely on … Assuming the node environment on such a thread is correctly set up to be indistinguishable from a single-threaded node environment, such addons will work correctly: the one thread where they do load is the only thread where they need to work, and they will not subsequently load because the library constructors will not fire again. In contrast, when using a special symbol, a module must support multiple init and module-instance-local storage, because such a symbol will be found every time the module is loaded, no matter what thread it's on, and even if it's loaded multiple times on a single thread. So I think landing this PR will help us, and will not cause any more breakage than the current inability to load a module more than once already causes.
src/node.cc
Outdated
```diff
@@ -185,7 +185,8 @@ static int v8_thread_pool_size = v8_default_thread_pool_size;
 static bool prof_process = false;
 static bool v8_is_profiling = false;
 static bool node_is_initialized = false;
-static node_module* modpending;
+static uv_once_t init_once = UV_ONCE_INIT;
```
The name of this variable should indicate that it refers to `thread_local_modpending` – so, like, maybe `init_modpending_once`?
src/node.cc
Outdated
```diff
@@ -1258,6 +1259,10 @@ inline napi_addon_register_func GetNapiInitializerCallback(DLib* dlib) {
       reinterpret_cast<napi_addon_register_func>(dlib->GetSymbolAddress(name));
 }

+void InitDLOpenOnce() {
```
This should be `InitModpendingOnce`, not `InitDLOpenOnce`.
I was able to cause the race by running the following code:

```js
/* index.js */
const assert = require('assert');
const {
  Worker, isMainThread, parentPort
} = require('worker_threads');
if (isMainThread) {
  let robinHood;
  (new Worker(__filename)).on('message', (data) => {
    if (data === 'ready') {
      robinHood = JSON.stringify(require('./build/Release/robin_hood'));
    } else {
      console.log(data + '\nvs.\n' + robinHood);
      assert.notStrictEqual(data, robinHood);
    }
  });
} else {
  parentPort.postMessage('ready');
  parentPort.postMessage(JSON.stringify(require('./build/Release/friar_tuck')));
}
```

where

```c
/* robin_hood.c */
#include <assert.h>
#include <node_api.h>
static napi_value Init(napi_env env, napi_value exports) {
  napi_status status;
  napi_value result;
  status = napi_create_uint32(env, 42, &result);
  assert(status == napi_ok);
  status = napi_set_named_property(env, exports, "answer", result);
  assert(status == napi_ok);
  return exports;
}
NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```

and

```c
/* friar_tuck.c */
#include <assert.h>
#include <node_api.h>
static napi_value Init(napi_env env, napi_value exports) {
  napi_status status;
  napi_value result;
  status = napi_create_string_utf8(env, "good", NAPI_AUTO_LENGTH, &result);
  assert(status == napi_ok);
  status = napi_set_named_property(env, exports, "question", result);
  assert(status == napi_ok);
  return exports;
}
NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```

in a shell loop with

```sh
while node --experimental-worker ./index.js; do echo -n ''; done
```

It's not deterministic though, so it doesn't serve as a good test.
@addaleax I believe that using … In conclusion, using …
@gabrielf that all makes sense to me. I guess we just need to get this PR updated based on your comments and then it looks good to me.
@rpetrich do you think you might be able to find the time to address my comments so we can move the PR forward?
@gabrielschulhof Thanks for pushing to the PR. I'd missed the earlier comments about naming.
@rpetrich happy to help!
@mhdawson could you PTAL?
@nodejs/addon-api could someone else please also take a look?
@nodejs/collaborators could someone please have a look?
CI resumed as https://ci.nodejs.org/job/node-test-pull-request/16483/
FreeBSD rebuild: https://ci.nodejs.org/job/node-test-commit-freebsd/19726/
I think FreeBSD is back to green. Let's try a Resume Build: https://ci.nodejs.org/job/node-test-pull-request/16501/
Landed in 3814534.
Fixes a rare race condition on modpending when two native modules are loaded simultaneously on different threads by storing it thread-locally. PR-URL: #21611 Reviewed-By: Gabriel Schulhof <gabriel.schulhof@intel.com> Reviewed-By: Anatoli Papirovski <apapirovski@mac.com>
Fixes a rare race condition on `modpending` when two native modules are loaded simultaneously on different threads by migrating `modpending` to be a thread local.

Checklist
- `make -j4 test` (UNIX), or `vcbuild test` (Windows) passes