
zlib: prevent uncaught exception in zlibBuffer #1811

Closed · wants to merge 3 commits

Conversation

@targos (Member) commented May 27, 2015

If the final buffer needs to be larger than kMaxLength, the concatenation fails and there is no way to catch the error:

buffer.js:173
    throw new RangeError('Attempt to allocate Buffer larger than maximum ' +
          ^
RangeError: Attempt to allocate Buffer larger than maximum size: 0x3fffffff bytes
    at checked (buffer.js:173:11)
    at fromNumber (buffer.js:56:51)
    at new Buffer (buffer.js:41:5)
    at Function.Buffer.concat (buffer.js:263:16)
    at Gunzip.onEnd (zlib.js:212:22)
    at emitNone (events.js:72:20)
    at Gunzip.emit (events.js:163:7)
    at endReadableNT (_stream_readable.js:890:12)
    at doNTCallback2 (node.js:437:9)
    at process._tickCallback (node.js:351:17)

Test case: https://gist.github.com/targos/643a802e0ed4fa60f3bd
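
For context, here is a rough sketch of the kind of reproduction the gist points at (my own approximation, not the gist's code; running it needs well over 1 GiB of memory): stream about 1 GiB of zeroes through gzip so the compressed payload stays small, then hand it to the buffered zlib.gunzip(), whose final Buffer.concat() is where the uncatchable RangeError comes from.

var zlib = require('zlib');

// Build compressed input whose *decompressed* size exceeds kMaxLength
// (0x3fffffff bytes) without ever allocating a buffer that large.
var gzip = zlib.createGzip();
var chunks = [];

gzip.on('data', function(chunk) {
  chunks.push(chunk);
});

gzip.on('end', function() {
  var compressed = Buffer.concat(chunks); // zeroes compress to roughly a megabyte
  zlib.gunzip(compressed, function(err, result) {
    // On affected versions this callback is never reached: the RangeError
    // is thrown from zlib's internal 'end' handler instead.
  });
});

var oneMiB = new Buffer(1024 * 1024);
oneMiB.fill(0);
for (var i = 0; i < 1025; i++)  // 1025 MiB > 0x3fffffff bytes once inflated
  gzip.write(oneMiB);
gzip.end();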

@mscdex added the "zlib" label (Issues and PRs related to the zlib subsystem) on May 27, 2015
@cjihrig (Contributor) commented May 27, 2015

Can you add a test?

A Contributor commented on this part of the diff:

    callback(e);
  } finally {
    engine.close();
  }

Could you do something more like:

err = null;

try {
  buf = Buffer.concat(buffers, nread);
} catch (e) {
  err = e;
}

buffers = [];
engine.close();
callback(err, buf);

@targos (Member Author) replied:

done

@targos (Member Author) commented May 27, 2015

Do you mean a test that the error is correctly caught?
I'm not sure how to do that. The unzipped data has to be more than 1 GB; wouldn't that fail on hardware like a Raspberry Pi?

@cjihrig (Contributor) commented May 28, 2015

There is an upcoming Buffer rewrite that, if I understand correctly, will get rid of the upper limit anyway. @trevnorris, do you have any thoughts for a test here?

@trevnorris (Contributor) commented:

The check should be there. But if we're confident that nread is the length of all the buffers, then we only need to check it against kMaxLength and return an error if it's greater. No need for the try/catch.
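
A sketch of that shape, as I read the suggestion (not necessarily the exact code that landed; the error message wording is illustrative), using the nread / buffers / engine / callback names visible in the diff above:

var kMaxLength = process.binding('smalloc').kMaxLength;
var kRangeErrorMessage = 'Cannot create final Buffer. ' +
                         'It would be larger than 0x' +
                         kMaxLength.toString(16) + ' bytes.';

// Inside zlibBuffer(engine, buffer, callback); `buffers` and `nread` are
// accumulated by the 'data' handler in the enclosing closure.
function onEnd() {
  var buf;
  var err = null;

  if (nread >= kMaxLength) {
    // Too large for one Buffer: report the error through the callback
    // instead of letting Buffer.concat() throw where nothing catches it.
    err = new RangeError(kRangeErrorMessage);
  } else {
    buf = Buffer.concat(buffers, nread);
  }

  buffers = [];
  engine.close();
  callback(err, buf);
}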

@targos (Member Author) commented Jun 4, 2015

@trevnorris Good point, I made that change. I also added a check in the sync code to generate the same error.
I still don't know how to test it without allocating more than 1 GB of buffers.
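
The synchronous path has no callback to hand an error to, so presumably the same condition surfaces there as a thrown, catchable RangeError. A minimal sketch of that idea, reusing the kMaxLength / kRangeErrorMessage names from the sketch above:

// Sync path sketch (assumed shape, not the exact patch): throwing here is
// fine because it happens inside the caller's own stack, so a try/catch
// around zlib.gunzipSync() and friends can handle it.
if (nread >= kMaxLength)
  throw new RangeError(kRangeErrorMessage);
var buf = Buffer.concat(buffers, nread);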

@trevnorris (Contributor) commented:

True. And that may be a problem on ARM devices.

@rvagg Thoughts?

@rvagg (Member) commented Jun 4, 2015

Yeah, that's not going to be a happy time on the small devices. If it must be tested, I guess you might want to first check the amount of system memory available, but that's also going to be tricky for cross-platform support. The alternative is to just skip it if arch == arm, but there's also MIPS (Tessel), Windows 10 on the RPi, Atom, and ...

@chrisdickinson (Contributor) commented:

My vote would be to make an internal module (lib/internal/smalloc, for the sake of argument) that exposes getMaxLength(), and to override it in the test to return a much smaller value.
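
A quick sketch of what such a module could look like (hypothetical, including the file and function names; this is not the approach the PR ultimately took):

// Hypothetical lib/internal/smalloc: zlib would ask this module for the
// limit instead of reading process.binding('smalloc').kMaxLength directly,
// and a test could stub the value to something tiny.
'use strict';

var kMaxLength = process.binding('smalloc').kMaxLength;

exports.getMaxLength = function getMaxLength() {
  return kMaxLength;
};

// Test-only hook to shrink the limit without touching the binding itself.
exports.setMaxLengthForTesting = function setMaxLengthForTesting(value) {
  kMaxLength = value;
};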

@trevnorris (Contributor) commented:

That may be possible by setting process.binding('smalloc').kMaxLength = <some small value> before requiring zlib. The problem is that it may have other unknown consequences.

@targos (Member Author) commented Jun 5, 2015

@trevnorris Your proposal works, thanks! I added a test.

@trevnorris (Contributor) commented:

One more addition to the test: can you set it back to the original value of 0x3FFFFFFF after requiring zlib? After that, let's add this to CI and see how it holds up.
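
Put together, the test pattern being discussed would look roughly like this (a sketch of the idea, not the exact test file that landed):

var smalloc = process.binding('smalloc');
var originalKMaxLength = smalloc.kMaxLength;

// Shrink the limit before zlib captures it at require() time...
smalloc.kMaxLength = 64;
var zlib = require('zlib');
// ...and restore the original 0x3fffffff so nothing else is affected.
smalloc.kMaxLength = originalKMaxLength;

// 128 bytes decompress past the faked 64-byte limit, so the RangeError
// should be delivered to the callback instead of crashing the process.
var input = new Buffer(128);
input.fill('a');

zlib.gzip(input, function(err, compressed) {
  if (err) throw err;
  zlib.gunzip(compressed, function(err, result) {
    if (!(err instanceof RangeError))
      throw new Error('expected a RangeError from gunzip');
  });
});

The trick relies on zlib reading kMaxLength once at require() time, which is why the value can be restored immediately afterwards without defeating the test.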

@targos (Member Author) commented Jun 6, 2015

Done. Can somebody start a CI?

@brendanashworth (Contributor)

@targos (Member Author) commented Jun 10, 2015

All green! LGTY?

@trevnorris (Contributor) commented:

LGTM

@indutny Do you have any objections to this change?

@indutny (Member) commented Jun 10, 2015

LGTM! It does look like a security concern to me. I think we may want to issue a CVE. What are your thoughts on this?

@trevnorris (Contributor) commented:

I believe the only issue is that a client could forcefully crash a server. That's not a trivial issue by any means. I'm just not sure whether it classifies as a security vulnerability or not. If you think it does, then cool. Let's do a CVE.

@indutny (Member) commented Jun 10, 2015

Yeah, but it is a bit trivial to do so (see the test). Anyway, how should we file a CVE? cc @nodejs/tsc

@targos (Member Author) commented Jun 10, 2015

I'm thinking that we could save some time and memory by checking the current size in the loop here and here, but that may be for another PR.
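
For reference, that follow-up idea would look something like this in the 'data' accumulation handler (a sketch only; the "here and here" links above point at the actual spots):

// Sketch of the possible follow-up (not part of this PR): give up as soon
// as the accumulated size crosses kMaxLength, instead of buffering more
// than 1 GiB of chunks only to fail at the final Buffer.concat().
var failed = false;

function onData(chunk) {
  if (failed)
    return;
  buffers.push(chunk);
  nread += chunk.length;
  if (nread >= kMaxLength) {
    failed = true;
    buffers = [];
    engine.close();
    callback(new RangeError(kRangeErrorMessage));
  }
}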

@trevnorris (Contributor) commented:

Yup. Let's save that for the next one. This one has already been signed off. :-)

@targos (Member Author) commented Jun 15, 2015

Ping @trevnorris: can this be landed?

@trevnorris (Contributor) commented:

@targos I'm not sure if this should land before the CVE is created. If @indutny doesn't respond by the TSC meeting on Wednesday then I'll bring it up in the meeting.

i.e., your patch is great. The hold-up is the missing process around landing a patch that should have a CVE.

@dcousens commented:

Great work all, definitely should have a CVE.

@indutny (Member) commented Jun 15, 2015

I think we should land it anyway; there is no point in releasing stuff without this ;) I don't have any further comments since I was never involved in the process of filing a CVE. cc @piscisaureus

trevnorris pushed a commit that referenced this pull request Jun 15, 2015
If the accumulation of data for the final Buffer is greater than
kMaxLength, it will throw an uncatchable RangeError. Instead, now pass
the generated error to the callback.

PR-URL: #1811
Reviewed-By: Fedor Indutny <fedor@indutny.com>
Reviewed-By: Trevor Norris <trev.norris@gmail.com>
@trevnorris (Contributor) commented:

Landed in 3806d87. Thanks for the patch.

@misterdjules commented:

@indutny @trevnorris @nodejs/tsc If I understand correctly, this issue also exists in node v0.10.x and node v0.12.x, so before issuing a CVE, I would suggest getting ready to backport and release the changes in this PR into those branches. Is that something you could help with?

@misterdjules commented:

Created nodejs/node-v0.x-archive#25536 to track that.

@trevnorris (Contributor) commented:

@misterdjules The fix itself is straightforward, but the current way of testing it relies on properties of the smalloc module. I'm not sure how this could be tested on v0.10 without needing to use 1 GB of memory.

rvagg added a commit that referenced this pull request Jun 23, 2015
PR-URL: #1996

Notable changes

* module: The number of syscalls made during a require() have been
  significantly reduced again (see #1801 from v2.2.0 for previous
  work), which should lead to a performance improvement
  (Pierre Inglebert) #1920.
* npm:
  - Upgrade to v2.11.2 (Rebecca Turner) #1956.
  - Upgrade to v2.11.3 (Forrest L Norvell) #2018.
* zlib: A bug was discovered where the process would abort if the
  final part of a zlib decompression results in a buffer that would
  exceed the maximum length of 0x3fffffff bytes (~1GiB). This was
  likely to only occur during buffered decompression (rather than
  streaming). This is now fixed and will instead result in a thrown
  RangeError (Michaël Zasso) #1811.
@rvagg removed the tsc-agenda label on Jun 24, 2015