[WIP] src,lib: policy permissions #33504
Conversation
Great to see this initiative. Has there been any discussion of per-package permissions? What is the best forum to engage in these discussions? Since Node.js has a well-defined concept of package boundaries with the package.json lookup, it is possible to consider fine-grained permission models per package, but it would involve providing a tailored builtin-module instance through the resolver reflecting the given policy. I know this is very different from the base implemented here, but if it were to be considered a possible future extension, perhaps there are ways to design the policy format in a way that would lend itself to package-based permissioning in this form?
It would be possible under the model proposed here if, for instance, modules were loaded in their own V8 contexts, which is certainly possible but introduces its own set of complexities. Imagine an API such as:

const vm = require('vm')
const mod = vm.require('foo', { policy: { deny: 'fs' } })

Or, when using ESM semantics:

vm.import('foo', { policy: { deny: 'fs' } }).then((mod) => { /*..*/ });

These each have their own issues and wouldn't be perfect sandboxes by any stretch, as the V8 context can be escaped, but it would be a path forward.
What I was considering is something more along the lines of per-package permissions, i.e. whether a given package can use a given capability. Don't mean to get too off-topic, but it's an interesting discussion. I suppose the questions are whether it is worth even considering and, if so, what might help gain alignment on it. I'm open to all options on both questions :)
I think this is a move in the right direction! 👍 A few things:
Also, the prior discussion from the Security WG is worth looking at, for anyone who hasn't seen it (or the many things it links to). /cc @deian
Potentially, although I would hesitate to try expressing those via command-line arguments. If we wanted to enable something more fine grained then we should expand on the
Yes, possibly. Workers / contexts created by the current context will (they don't currently) have the current policy as a baseline but may have more restrictive policies.
💯 I'm all for this approach.
Since contexts currently don't have access to …
This fails to compile with GCC:
I am +1 on this
I'm also more of a fan of the approach Guy mentioned; it would be interesting to explore more OCAP within Node.
This also strikes me as a great benefit of Node over Deno, in that we do have packages and package scopes. A common scenario that I can imagine myself taking advantage of is to somehow configure Node such that my app has full permissions, but packages have no write or network access. In other words, I trust my code, and Node builtins, but not third-party code. In Node we do have the concept of the boundary between application code and third-party code, so we should take advantage of it.
Introducing an OCAP model at the package layer is going to be a significantly more involved and fundamentally invasive activity. The arrangement of our core APIs across core modules is inconsistent and does not lend itself easily to that model. For instance, both process.dlopen() and process.hrtime() exist on the same module and are two very different capabilities. While I do not doubt that loading modules in protected scopes is an interesting problem, this PR is not looking to tackle that problem.
I think this would be fantastic to do. A significant use case would be the "build" step in …
@guybedford @devsnek @GeoffreyBooth ... let's move discussions around module-scoped permissions/capabilities out of this PR and into its own thread, as it really is a much different and much more difficult topic to explore. In this PR, I am specifically looking at coarse-grained use restrictions around process, context, and worker-thread-level core API capabilities (i.e. "is this API available for use in this process or not?") and will subsequently look at a separate PR that leverages the …
So thinking about this, it could actually be a special permission of its own. Specifically, something like:

$ node --policy-deny=policy

Denying the `policy` permission …
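A rough sketch of what denying the `policy` permission could mean in practice, assuming a hypothetical `process.policy` object with a `check()` method (the API shape and the exact behavior here are assumptions for illustration, not something stated in this comment):

// started as: node --policy-deny=policy app.js
// with the `policy` permission denied, user code can no longer inspect
// (or further restrict) the policy at runtime:
try {
  process.policy.check('net');
} catch (err) {
  console.error(err.code); // 'ERR_ACCESS_DENIED'
}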
This would be perfect for the defense model I had in mind and makes a lot of sense!
Also, I'd say we should move forward on this and leave OCAP for another iteration. I have experimented a bit with it and found it was very hard to get right. Having process-level policy right now is, IMO, a good move, as it provides value that users can pick up quickly and gives us a great base to iterate on later.
Interestingly, if you deny fs:

$ ./node --policy-deny=fs script.js
fs.js:1650
const stats = binding.lstat(baseLong, false, undefined, ctx);
^
Error: Access to this API has been restricted
at Object.realpathSync (fs.js:1650:29)
at toRealPath (internal/modules/cjs/loader.js:369:13)
at Function.Module._findPath (internal/modules/cjs/loader.js:673:22)
at resolveMainPath (internal/modules/run_main.js:12:25)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:65:24)
at internal/main/run_main_module.js:17:47 {
code: 'ERR_ACCESS_DENIED'
}

While this makes sense, because module loading itself uses fs operations internally, it means the entry point can't even be loaded when fs is denied. This is support for taking one of a few different approaches here.
Relatedly, doing a …
@bengl yeah I spotted that also. We have a number of such issues currently (e.g. all of our internal uses of process.env). I've been thinking through a number of approaches on this. Internal privileged operations being one possibility. Another would be to implement a kind of run level type of mechanism, where node bootstrap or internal code runs at one run level, and user code runs at another. The permissions would then be applied at the user code run level. That however gets quite a bit more complicated, so I am still evaluating the pros and the cons.
Note that … Does this happen with …?
That approach would probably have to be restricted to entirely ESM module trees. Another approach here is to use the existing integrity checks as a baseline allow-list for finer-grained fs policies.
As it currently stands with this PR, you can't load an …
I totally agree, this would be a clear advantage of ESM.
cc @nodejs/modules-active-members, since this seems to impact it?
To be clear: it would be more ideal, I think, to ensure that existing CommonJS code could still be restricted from … If there's a UX concern, the restrictions can be implicitly finer-grained by simply allowing files to be read if they're in the Integrity Checks JSON file. That being said, allowing user control over what can be read/written would allow for other use cases too.
My next step on this will be to do a more complete write-up of the underlying model, the expectations, requirements that I see, etc. One important point on this: the initial version of this is INTENDED to be imperfect and experimental and the goal is to not solve every potential problem out of the gate before it lands but to give us something that we can iterate on and improve over time. In other words, it's an experimental feature that we will absolutely not get immediately correct.
Agreed!
To try and put together a more concrete suggestion from my previous comment, one way in which we could lay the path for modular capabilities might be to consider having an architecture for builtin factories:

const fs = createFSBuiltin(permissions);
// create an fs builtin gated to a specific folder on the filesystem:
const fsLocal = createFSBuiltin({ gatedFolder: '/path/to/project' });
// create an fs builtin with only read capability:
const fsRead = createFSBuiltin({ read: true });

When a user does … I'm not saying this should be exposed to userland, but rather that as an overall architecture approach internally it might be a path to the compartment / modular security models. This also gets around the problem of the internal fs functions having the same permissions as the rest of the world, since the permission system can be reflected in the instance boundary itself, while internal core code continues to use the high-permission internal instance. To add modular restriction to such a model would then just be having a resolver that returns the appropriately permissioned builtin. I'd be interested to hear if this works with the constraints @bengl imagines here too. There are likely edge-case complexities like the fact that realpath can move outside of gated folders, so the gating checks would need to be associated with the call invocation itself rather than being environment-level permission checks, but that probably isn't too bad? I certainly understand if that's an argument for such a model again being a future improvement.
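A self-contained sketch of the factory idea above, purely for illustration (`createFSBuiltin` and the shape of the permissions object come from the comment; nothing here is an existing Node.js API):

'use strict';
const fs = require('fs');

// Hypothetical factory: return an fs facade exposing only what the policy grants.
function createFSBuiltin({ read = false, write = false } = {}) {
  return {
    readFileSync: read ? fs.readFileSync : deny('fs.in'),
    writeFileSync: write ? fs.writeFileSync : deny('fs.out'),
  };
}

function deny(permission) {
  return () => {
    const err = new Error(`Access to this API has been restricted (${permission})`);
    err.code = 'ERR_ACCESS_DENIED';
    throw err;
  };
}

// A resolver could then hand each package the instance matching its policy,
// while internal core code keeps the fully privileged builtin.
const fsRead = createFSBuiltin({ read: true });
fsRead.readFileSync('./package.json', 'utf8'); // ok
// fsRead.writeFileSync('./out.txt', 'data');  // would throw ERR_ACCESS_DENIED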
(Sorry for the late reply!) Gating on the call invocation itself would certainly be a start, but it's still missing the fact that resource access can be granted to module A, denied to module B, but then have that resource passed from A to B after the call has already been made. That being said, per-policy-set builtins, as you've described them, seem reasonable as long as there's actual isolation between policy sets. This PR opts for binding-layer checks, and I think it should be allowed to proceed in that fashion, because regardless of how this ends up looking, permissions enforcement ought to be transparent to user code.
@addaleax @bengl @guybedford @targos @devsnek @vdeturckheim @mcollina ... I've started picking this back up again. I've taken a step back, reworked the central implementation, and taken out the actual policy enforcement for the time being so we can focus on the central mechanism. I've updated the pull request description. Please take a look when you have a moment.
Definitely a move in the right direction! 👏
For instance, we're talking about different types of granularity in #33504 (comment) and #33504 (comment). If you have hooks for importing … A knee-jerk API, just for illustration purposes, could look like:

// policy.js
// everything is allowed in here: fs, net, etc
import somePkgImportPolicy from 'some-pkg';

export default {
  import({
    module /* {id, filename, exports, ...} */,
    pkg, /* optional, package.json contents or {name: '..'} for internal modules */
    exportName
  }) {
    if (!somePkgImportPolicy(...arguments)) return false;
    if (pkg.name === 'fs') return false;
    if (/node_modules\/lodash/.test(module.filename)) return false;
    // ...
    return true;
  },

  call({
    module,
    pkg,
    functionName,
    functionArgs
  }) {
    if (pkg.name === 'fs' && functionName === 'readSync') return true;
    if (pkg.name === 'fs' && functionName === 'writeSync' && /^\/tmp\//.test(functionArgs[0])) return true;
    if (pkg.name === 'internal' && functionName === 'fs.in') return false;
    if (pkg.name === 'internal' && functionName === 'fs.out') return false;
    // ...
    return true;
  }
};

I've made a similar argument in the pnpm world, and I hope @zkochan still holds to his words that it was a great idea :) 👋 https://medium.com/pnpm/why-package-managers-need-hook-systems-b8125d8b3dc7
PS: I can already imagine cons with the hook system in this specific context of policies, but I'd suggest focusing a bit on the pros of it and only later on possible cons and mitigations.
All, I've updated the implementation here with a few significant changes...
There are still a ton of unanswered questions in this, including whether we want this kind of thing at all.
@andreineculau ... the hook-based approach looks interesting but it also looks much more complicated -- in a way that I'm not sure is justified. To be used correctly, the permissions mechanism really ought to be as simple as possible -- even if that borders on the simplistic.
This is a new take on work @addaleax had previously done to define access control policies for Node.js. The original PR #22112 failed to make progress for a number of reasons.
This is a work in progress that is not yet complete.
How does it work
There is no change to the default behavior of Node.js. Running the `node` binary with no command-line flags means that all permissions are granted by default, as they are today.

Permissions may be denied or granted via command-line flags:

(example: deny network access)

$ node --policy-deny=net
A simple JavaScript API is provided for checking grants and applying a stricter policy.
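A rough sketch of how that API might look, assuming a `process.policy` object with `check()` and `deny()` methods (names and exact shape are assumptions for illustration, not confirmed from the final implementation):

process.policy.check('net');    // true if network access is currently granted
process.policy.deny('fs.out');  // apply a stricter policy: deny file system writes
process.policy.check('fs.out'); // false from this point on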
There is no user-facing API for granting permissions from JavaScript.
All `internal/*` modules have access to a special `runInPrivilegedScope()` function that can be used to execute a synchronous function with no permission checking. For instance, suppose that Node.js is started without file system permissions (e.g. `node --policy-deny=fs`). Ordinarily this would mean that Node.js itself could not function because all file system permissions would be denied. The `runInPrivilegedScope()` utility (which is injected into internal modules the same way `primordials` and `internalBinding` are) makes it possible.
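A minimal sketch of how an internal module might use it (illustrative only; `runInPrivilegedScope` is injected into internals so no import is needed, and the file and call site are hypothetical):

// lib/internal/example.js (hypothetical)
'use strict';
const { readFileSync } = require('fs');

function readInternalConfig(path) {
  // Even when Node.js was started with --policy-deny=fs, this read succeeds
  // because it runs inside a privileged scope.
  return runInPrivilegedScope(() => readFileSync(path, 'utf8'));
}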
The `runInPrivilegedScope()` function can be bound such that... The function passed in to `runInPrivilegedScope()` is executed synchronously, and the permission scope reverts as soon as the function returns. Care must be taken because any user code that is executed synchronously within the privileged scope will run with no permission checking.

At the C++ level, an equivalent mechanism is provided using the `PrivilegedScope` utility.

The permission set is hierarchical, e.g. `net`, `net.in`, `net.out`. Denying or granting `net` denies or grants the entire branch.

If a denied API is invoked, an `ERR_ACCESS_DENIED` Node.js error would be thrown.

All permissions are set per process. Once denied, they are denied for the entire process, including all worker threads.
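To illustrate the hierarchical behavior, again using the assumed `process.policy` sketch from above (illustrative only):

// started as: node --policy-deny=net app.js
process.policy.check('net');     // false: explicitly denied
process.policy.check('net.in');  // false: part of the denied `net` branch
process.policy.check('net.out'); // false: likewise
require('net').createServer();   // throws ERR_ACCESS_DENIED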
The permissions

- `inspector` (controls access to the inspector API, implicitly denied)
- `addons` (controls access to native addons, implicitly denied)
- `child_process` (controls access to child processes, implicitly denied)
- `fs` (controls access to file system)
- `fs.in` (controls access to file system read operations)
- `fs.out` (controls access to file system write operations)
- `net` (controls access to network access)
- `net.in` (controls access to network servers)
- `net.out` (controls access to network clients)
- `wasi` (controls access to experimental WASI)
- `process` (controls access to APIs that manipulate process state)
- `signal` (controls access to sending signals to other processes)
- `timing` (controls access to high resolution timing APIs)
- `env` (controls access to environment / user info)
- `workers` (controls access to worker threads)
- `policy` (controls access to policy permission checking)

Implicitly denied permissions are denied automatically when any other permission is denied.
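A small illustration of the implicit denials, once more with the assumed `process.policy.check()` from the earlier sketch:

// started as: node --policy-deny=net app.js
process.policy.check('fs');        // true: never denied
process.policy.check('addons');    // false: implicitly denied because at least
                                   //        one other permission was denied
process.policy.check('inspector'); // false: likewise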
Denying permissions within a scope

Given the way that `runInPrivilegedScope()` works, a denied permission will only be in effect while the scope is active. So, for instance, if I launch Node.js with net enabled, then call `runInPrivilegedScope()` and subsequently deny net, once the privileged scope pops off the stack net will be permitted again. This should be obvious but it's worth calling out.

The question is whether it should be possible to deny a permission in parent scopes from a child. To keep things simple, I would say no.
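A sketch of that scoping behavior (internal-module context; `runInPrivilegedScope` is as described above and `process.policy` is the assumed API from the earlier sketches):

// node was launched with net enabled
runInPrivilegedScope(() => {
  process.policy.deny('net'); // this denial lasts only while the scope is active
  // ... work with net denied ...
});
// once the privileged scope pops off the stack, net is permitted again
process.policy.check('net');  // true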
Additional work to come

The previous version of this PR included all of the module changes to enforce the policy. I've backed those out so we can focus first on the core mechanism here. I will be reapplying those changes later.
FAQ
This is a key question. I don't want to classify this as a security thing. It's a capability-centered policy configuration. Security for your Node.js process really needs to come from the environment (e.g. running the Node.js process in a sandboxed container or running under a restricted uid, etc). What this mechanism does is selectively disable certain APIs in the standard library so that they are not available to running scripts. One use case, for instance, would be using utilities like `npx` to run scripts that should not necessarily have the ability to open network connections by default.

Yes and no. Yes in that they cover the same fundamental ideas. No in that it uses a different way of expressing permissions and is not advertised as a security mechanism. For deno, for instance, you enable file system reads for specific paths. Here, file system access is either on or off.
And in case someone is wondering if this is opened in response to deno having this, work on this mechanism started a long time ago in #22112 but was put on hold and we've been meaning to revisit it. We (NearForm) have had some use cases for this for a while that we're just now coming back around to so we wanted to move it forward.
I don't currently have plans to align it with deno's permission model but that's obviously something we could discuss if there would be enough ecosystem benefit to do so and it covers all the use cases.
The ability to grant permissions at runtime is ruled out here to prevent a script from maliciously elevating its permissions. Because this information is stored in the V8 context, it would be possible for a native addon to manipulate these permissions; so, if any permissions are denied, we automatically also deny native addons by default. When native addons are allowed, it would be easy to manipulate things.
In most of the use scenarios we've modeled for using this, the ability to request a permission just never was a consideration. Imagine, for instance, code running on a server. These modules are going to be known to the developer, what capabilities they need will be known, and there just won't be a need to enable any kind of dynamic request mechanism.
Hahahah... um... no, not yet. Will get to that a bit later.
What Now?
I am opening this PR to start the discussion and the iteration on these ideas. Feedback is welcome and requested.
/cc @addaleax @mcollina
Checklist
`make -j4 test` (UNIX), or `vcbuild test` (Windows) passes