buffer: fix DoS vector in atob #51670
Conversation
Is there already a benchmark? Anyhow: legacy is not deprecated.
I didn't write a benchmark for the Node.js codebase, but here are the benchmarks from feross/buffer:
Without this PR:
With this PR:
In other words, with this PR, Also, without this PR, the performance of
@nodejs/performance Should there be benchmarks? Maybe we should still consider the perf for btoa/atob.
I am fine with landing this (since it's an improvement) but no benchmarks. (As a side note, nice seeing you back!)
nonAsciiWhitespaceCharCount++;

if (index === kEqualSignIndex) {
if (ch === 0x3d) {
Can you add a comment explaining what `0x3d` represents?
if (index > 4) {
// The first 5 elements of `kForgivingBase64AllowedChars` are
// ASCII whitespace char codes.
if ((ch | val) & ~0x7f) {
Can you add a comment explaining what this line is doing?
Yeah, will do. It's some non-obvious bitwise magic. It's ensuring two things: that `ch` is ASCII and is a valid index of `kForgivingBase64AllowedChars`, and that `val` is not `-1` (i.e. `ch` is a valid base64 character).
This line could actually be OR'd onto an accumulator and checked after the loop is complete to remove a branch, but I figure people here would be opposed to it due to the "no optimization of legacy functions" philosophy.
On that note, I was originally considering making this loop entirely branchless, but I'm guessing that definitely wouldn't be acceptable. It would also probably require a whole series of comments.
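The check described above, and the branch-free accumulator variant, can be sketched as follows. This is a minimal illustration based on the discussion, not the actual Node.js code; the table layout (`-1` for invalid char codes, `0` for valid base64/whitespace) and all names are assumptions.

```javascript
// Assumed allowed set: ASCII whitespace plus the base64 alphabet and '='.
const ALLOWED = '\t\n\f\r +/0123456789=' +
  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
const table = new Int8Array(128).fill(-1); // -1 = invalid, 0 = valid
for (const c of ALLOWED) table[c.charCodeAt(0)] = 0;

// Branching version: reject as soon as a bad character is seen.
function validate(input) {
  for (let i = 0; i < input.length; i++) {
    const ch = input.charCodeAt(i);
    const val = table[ch] | 0; // out-of-range lookup (ch > 127) coerces to 0
    // Non-zero iff ch > 0x7f (not ASCII, so not a valid table index)
    // or val === -1 (ASCII but not an allowed character):
    if ((ch | val) & ~0x7f) return false;
  }
  return true;
}

// Branchless version: OR everything into an accumulator and test once
// after the loop, trading a per-iteration branch for one final check.
function validateBranchless(input) {
  let acc = 0;
  for (let i = 0; i < input.length; i++) {
    const ch = input.charCodeAt(i);
    acc |= ch | (table[ch] | 0);
  }
  return (acc & ~0x7f) === 0;
}
```

The accumulator variant reports failure only after scanning the whole string, which is the trade-off alluded to above.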
I understand. I strongly think that we should add comments to the code as well.
throw lazyDOMException('Invalid character', 'InvalidCharacterError');
}

if (ch > 0x20) {
Can you add a comment here?
}
const buf = Buffer.from(input, 'latin1');
return buf.toString('base64');
}

// Refs: https://infra.spec.whatwg.org/#forgiving-base64-decode
// https://infra.spec.whatwg.org/#ascii-whitespace
// Valid Characters: [\t\n\f\r +/0-9=A-Za-z]
// Lookup table (-1 = invalid, 0 = valid)
Why did you select -1 and 0, instead of 0 and 1?
See my comment for line 1308.
It would be useful to have some clarification. @addaleax stated that these functions should not be used, and should therefore not be optimized. It seems that this PR attempts to reverse an earlier decision. Am I correct? (I am not expressing any opinion, I just want to understand.)
lgtm
I really don't understand this logic. If it's never supposed to be used, why implement it at all? If it's truly the case that we're not allowed to optimize this function, then I will submit another PR which removes it, because it's irresponsible to have it in Node.js given its performance issues (I still maintain that it is indeed a DoS vector).
The answer to all of this is no.
And for reference: here is Deno's implementation of atob. It's calling out to a SIMD-optimized base64 decoding function. Here is Bun's implementation of atob. It's calling out to JSCore's native base64 decoder. Basically every JavaScript environment has an absurdly optimized version of this function. To say that I'm not allowed to make a trivial few-line change in order to make a function 70 times faster would be ridiculous. At that point I'm about ready to give up on Node.js after my 14 years of using it, because it will be clear that Node.js has adopted some very strange development philosophies: problems which Bun and Deno apparently do not have.
I'm +1 on landing this and optimizing it further.
@anonrig, pleased to hear you say that. I'm going to rewrite everything as a branchless loop I think. I was going to leave it as-is, but the whole "no optimizations ever" thing being brought up got me all riled up.
I also think we should land this one and then discuss #38433; there's no need for this function to be slower.
Or, we don't even need to land this one if we merge #38433; probably that implementation will be way faster than both this one and the current implementation.
@jasnell, @addaleax, are you still against the optimization of these functions?
(Before anyone gets angry at me... I stress that I did not oppose this PR nor do I plan to. There is no point in arguing with me about it, I am just trying to gain a better understanding. Just to make sure there is no misunderstanding I will also approve the PR after this comment.) Here is what I think... The reason why we need to validate the string in JavaScript is because Buffer.from has effectively no error handling. It will just ignore any character it does not recognize.
Yet the
I think that if Please comment if you think I am wrong.
For context: nodejs/performance#128
@lemire, I apologize if my post sounded like it was directed at you. I didn't mean for it to be. It was more generalized frustration towards the "no optimization" philosophy.
@chjj I am not offended, I just wanted to make sure that there was no disagreement. |
Here is a branchless version of the loop. I don't know if everyone is okay with that, but it works well and gives a slight speedup over the current code in this PR. Edit: This is the function I wanted to write, but anticipated that some people would be averse to it.
The new function will typically be faster when you anticipate that the result is valid, which should often be the case.
Like most other collaborators, I believe I've never stood in the way of optimizing this function, so I don't appreciate the hostility in this discussion. Anyway, ...
If you really want to throw O in here, be aware that
We rarely consider linear runtimes to be DoS vectors. If we did, one could argue that using a slow programming language interpreter or a slow CPU would be a DoS vector. There are likely many features in Node.js whose worst-case runtime far exceeds (any linear function of) the ideal runtime, yet we don't automatically consider those DoS vectors. (Also, actual security issues must be reported through HackerOne according to our security policy, and must not be disclosed through GitHub until they are resolved.)
FWIW, I think most would agree with James and Anna that in an ideal world, nobody would be using this function, so optimization would be pointless, and in this non-ideal world, optimizing it encourages the use of an undesirable API. Still, the project's focus has shifted heavily towards interoperability and performance in recent years, and this PR doesn't add a new dependency like the other PR originally did, so it seems much more likely that the community will appreciate straightforward optimizations that don't add a ton of complexity.
True. I suppose it's still just linear at the end of the day, but I don't know a simpler way to express it. That said, I think everyone here understands what I mean when I say
The pedantic focus on big-O semantics combined with this statement makes me worry. If there were 100,000 unnecessary iterations inside the loop, would you be okay with it? It's still just linear complexity. Things like this are why I say I'm worried Node.js has adopted some strange development philosophies. For a member of Node.js to say and think this function is acceptable because "it's still just O(n)" is deeply concerning to me.
Again, the trivialization of this issue is concerning. Code like this would be absolutely unacceptable in my industry. People would get fired for it, it would come up in code audits, etc.
I'll try to keep it to a minimum, but I'm the kind of guy who yells at the TV, so bear with me. It's nothing personal.
For reference, I plan to offer a solution later this year that should make this code unnecessary and greatly speed up the whole thing. The plan is to improve the base64 decoding in Buffer. With some luck, by the end of the year, Node will have the fastest decoder. :-) See #51670 (comment)
Regardless of the constant you throw in, is the amortized/worst-case runtime not in the same order of magnitude as the average runtime? Don't get me wrong, that constant might be unacceptable from a performance point of view, but I fail to see how this qualifies as a DoS vector. Pretty much none of the string encoding and decoding functions in Node.js provide any constant-time guarantees for fixed input sizes.
I am sorry to somehow cause you concern for the project as a whole, but I did not say that there are no problems with this function or its performance, nor did I say that this function is acceptable. I merely questioned the purported DoS vector. |
Just because they're not optimized doesn't mean they're not used. The idea of not optimizing them to dissuade use was understandable and agreed upon at the time, but as @tniessen said, the general approach has shifted slightly to support interoperability. These functions are the only interoperable way to do base64 en/decoding (at least until https://github.com/tc39/proposal-arraybuffer-base64 progresses) and so they'll naturally be used by universal JavaScript code, and I wouldn't dare blocking performance improvements. I wouldn't spend efforts on zealously squeezing out every last bit of performance given that a better-suited API is on the horizon, but I am +1 on landing the low-hanging fruit.
If we decide to optimize these functions I would prefer we go with the approach in #38433 and reuse code that's shared by other buffer methods, instead of creating another optimized implementation in JS land, especially if the reused C++ version is already faster. |
Actually, from a glance it seems the bottleneck comes from validation instead of actual encoding/decoding, and #38433 just rolls the validation into the actual encoding/decoding as well as skipping the intermediate Buffer, so theoretically that would already be faster & use less memory (i.e. less GC pressure) than only tweaking the validation routine in JS land.
I tried to keep the hostility to a minimum. I guess you weren't able to return the favor. Nowhere in my post did I describe what my industry was or mention PoW cryptocurrencies. I don't know you, so I can only assume you e-stalked me (or you already know who I am?). I'm not seeing any irony here. Are you implying that cycles are being "wasted" on PoW mining? If that's the case, I also owe you thanks. That was a good laugh. Anyway, all I can gather from your post is that you've never written code intended to run in a truly adversarial environment.
@joyeecheung, agreed. Should I close this if there is consensus around this?
People who care a lot about the performance of these functions should contribute good benchmarks. That would be most useful.
The current code seems to go from string to Buffer and back to string, so it is not direct string to string. That might be a good idea... but we need benchmarks!
IIRC @joyeecheung had mentioned that V8 doesn't give us the ability to read and construct strings "directly", and since Node ships with a custom implementation of
As far as user inputs go, no: due to the abstraction of potential string ropes, the C++ layer always needs to decode the string into a flat native buffer before it can do any encoding/decoding on it, even if the string is passed directly into the JS land like what #38433 does (I think that already implements what you meant by string-to-string; even though an intermediate native buffer on the C++ side is inevitable, it's at least lighter than a JS Buffer). With the web-interoperable APIs, if one needs to do a string-to-string conversion they need to pair the proposed JS API with TextEncoder/TextDecoder to convert between string and Uint8Array first, in UTF-8, and this is where the problem of atob()/btoa() comes in, because they use Latin-1 and there's no other option. I think there would be some overhead with that, but definitely not like the performance cliff that this PR is trying to address, as long as the JS engine doesn't implement the validation with a terrible computational complexity. I don't think our goal is to make atob() or btoa() as fast as possible here, because they are already lacking on the correctness side and we have more correct APIs coming up. The question is whether they deserve at least a reasonable computational complexity. I would say rolling the validation into encoding/decoding in C++ seems to be worth it (despite leading to more code, most of that is just binding boilerplate), but extra bitwise computation in JS land (JS isn't really good for this) + primordials on the JS side is less so.
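To illustrate the Latin-1 point made above (my own sketch, not code from the PR): `atob()` returns a "binary string" whose code points are the raw decoded bytes, so recovering UTF-8 text requires an extra TextDecoder step.

```javascript
// atob() is Latin-1: each output char is one byte of the decoded data.
const b64 = Buffer.from('é', 'utf8').toString('base64'); // UTF-8 bytes 0xC3 0xA9
const binary = atob(b64);    // two chars: '\u00c3', '\u00a9' (raw bytes)
const bytes = Uint8Array.from(binary, (c) => c.charCodeAt(0));
const text = new TextDecoder().decode(bytes); // reinterpret as UTF-8: 'é'
```

This assumes a Node.js version with the global `atob()` and `TextDecoder` (Node 16+).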
@joyeecheung are you objecting? |
+1 ... No objections |
@mcollina No, that was non-blocking. |
];
const kEqualSignIndex = ArrayPrototypeIndexOf(kForgivingBase64AllowedChars,
                                              0x3D);
/* eslint-enable no-multi-spaces, indent */

function atob(input) {
// The implementation here has not been performance optimized in any way and
This might need to be revised.
The simdutf library (already part of Node.js) will have accelerated base64 support in the next release. Decoding will follow the WHATWG forgiving-base64 specification so that ASCII white spaces are allowed in base64 content.
With the latest version of simdutf, you can implement atob in C++ as follows:

// On success: returns a non-negative integer indicating the size of the
// binary produced; it must be no larger than 2147483647 bytes.
// In case of error, a negative value is returned:
// * -2 indicates an invalid character,
// * -1 indicates a single character remained,
// * -3 indicates a possible overflow (i.e., more than 2 GB output).
void Base64ToBinary(const FunctionCallbackInfo<Value>& args) {
Environment* env = Environment::GetCurrent(args);
THROW_AND_RETURN_UNLESS_BUFFER(env, args.This());
SPREAD_BUFFER_ARG(args.This(), ts_obj);
THROW_AND_RETURN_IF_NOT_STRING(env, args[0], "argument");
Local<String> str = args[0]->ToString(env->context()).ToLocalChecked();
size_t offset = 0;
size_t max_length = 0;
THROW_AND_RETURN_IF_OOB(ParseArrayIndex(env, args[1], 0, &offset));
if (offset > ts_obj_length) {
return node::THROW_ERR_BUFFER_OUT_OF_BOUNDS(
env, "\"offset\" is outside of buffer bounds");
}
THROW_AND_RETURN_IF_OOB(ParseArrayIndex(env, args[2], ts_obj_length - offset,
&max_length));
if (max_length == 0)
return args.GetReturnValue().Set(0);
char* buf = ts_obj_data + offset;
size_t buflen = max_length;
if (buflen > INT32_MAX) {
  return args.GetReturnValue().Set(-3);
}
int32_t written{0};
if (str->IsExternalOneByte()) {  // 8-bit case
  auto ext = str->GetExternalOneByteStringResource();
  simdutf::result r = simdutf::base64_to_binary_safe(
      ext->data(), ext->length(), buf, buflen, simdutf::base64_default);
  if (r.error == simdutf::error_code::SUCCESS) {
    written = static_cast<int32_t>(buflen);
  } else if (r.error == simdutf::error_code::INVALID_BASE64_CHARACTER) {
    written = -2;
  } else if (r.error == simdutf::error_code::BASE64_INPUT_REMAINDER) {
    written = -1;
  } else {
    written = -3;
  }
} else {  // 16-bit case
  String::Value value(env->isolate(), str);
  simdutf::result r = simdutf::base64_to_binary_safe(
      reinterpret_cast<const char16_t*>(*value), value.length(),
      buf, buflen, simdutf::base64_default);
  if (r.error == simdutf::error_code::SUCCESS) {
    written = static_cast<int32_t>(buflen);
  } else if (r.error == simdutf::error_code::INVALID_BASE64_CHARACTER) {
    written = -2;
  } else if (r.error == simdutf::error_code::BASE64_INPUT_REMAINDER) {
    written = -1;
  } else {
    written = -3;
  }
}
args.GetReturnValue().Set(written);
}
PR-URL: #52381 Refs: #51670 Reviewed-By: Daniel Lemire <daniel@lemire.me> Reviewed-By: Vinícius Lourenço Claro Cardoso <contact@viniciusl.com.br> Reviewed-By: Matteo Collina <matteo.collina@gmail.com> Reviewed-By: Robert Nagy <ronagy@icloud.com> Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com> Reviewed-By: Filip Skokan <panva.ip@gmail.com>
Co-authored-by: Daniel Lemire <daniel@lemire.me> PR-URL: #52381 Refs: #51670 Reviewed-By: Daniel Lemire <daniel@lemire.me> Reviewed-By: Vinícius Lourenço Claro Cardoso <contact@viniciusl.com.br> Reviewed-By: Matteo Collina <matteo.collina@gmail.com> Reviewed-By: Robert Nagy <ronagy@icloud.com> Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com> Reviewed-By: Filip Skokan <panva.ip@gmail.com>
This should be closed as #52381 landed.
I am closing it. |
I know the consensus is that this function shouldn't be optimized, but the O(n*70) worst-case complexity of the loop isn't mentioned in the docs. In fact, the comments in the code don't even go into detail about why the function is slow; they simply say it's "not optimized". This is a DoS vector to the unaware user and irresponsible IMO. We only found this while running benchmarks for the browserify buffer module.
We are currently trying to remove our dependency on base64-js by using atob in both Node and the browser. Performance in Chromium is exceptional and faster than anything we could write in JavaScript. Node is a different story. The mitigation for this is absolutely trivial, and I'm scratching my head as to why this wasn't done in the first place.
Anyway, I could optimize it even further with a few more lines, but that's against the rules, so I just did the absolute bare minimum to make it more palatable to everyone here. Please reconsider.
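For context, the kind of cross-platform usage described above (replacing base64-js with the interoperable `atob()` global in both Node and the browser) looks roughly like this. This is my own sketch, not code from feross/buffer, and the helper name is made up.

```javascript
// Decode base64 to a Uint8Array using only the atob() global, which is
// available in browsers and in Node.js 16+.
function base64ToBytes(b64) {
  const binary = atob(b64); // throws InvalidCharacterError on bad input
  const out = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) out[i] = binary.charCodeAt(i);
  return out;
}
```

With a slow `atob()`, every caller of a helper like this inherits the worst-case cost, which is why the validation loop's complexity matters beyond micro-benchmarks.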